Introduction to DevOps Principles
YEAR III
SEMESTER V
REGULATIONS 2022R
COURSE CODE CY3V15,CB3V15
COURSE NAME DEVOPS
Course Code/Title:CY3V15/Devops Unit: I
“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of
agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture),
and seeks to improve collaboration between operations and development teams. DevOps implementations
utilize technology— especially automation tools that can leverage an increasingly programmable and
dynamic infrastructure from a life cycle perspective.”
DevOps is defined as a combination of processes and tools created to facilitate organizations in delivering
services and applications much faster than they can through conventional software development processes.
It helps increase customers’ confidence in the applications that an organization offers, thereby allowing the
company to flourish and achieve its business goals faster.
Benefits of DevOps
DevOps helps organizations deliver added value to their customers. Here are some compelling benefits
of DevOps.
1. Smarter work and faster release: With DevOps, your development team can release the required
deliverables quickly. Faster release of deliverables will keep you miles ahead of your competitors,
which is very important in today’s cut-throat business realm. Businesses should understand that if
their review cycle is not automated, it will slow down the release process. Moreover, the inclusion
of disparate tools will lead to context switching and higher costs. Thus, DevOps can help rectify
this worrisome business situation.
2. Quick resolution of issues: In a business world where speed and accuracy are paramount, a fast
feedback loop will help you thrive. With DevOps, the communication process becomes
seamless, and, as such, it minimizes the time required to solve issues. Without open
communication, key issues can slip out of mind, which will have serious repercussions in the
long run. DevOps fosters open communication that helps resolve issues, thus unblocking the
release pipeline faster.
3. Better collaboration between teams: DevOps paves the way for more dynamic and round-the-clock
communication between teams. It renders an environment for mutual
collaboration and integration among teams that are distributed globally. Eliminating the traditional
departmental barriers between teams forms a new sense of ownership, wherein each team member
feels equally responsible for meeting delivery timelines. This collaboration contributes to happier
and more engaged employees.
4. Fostering innovative mindsets: With DevOps, deployment phases of the application are more
relaxed as compared to traditional methods. This is because it streamlines the entire process,
ensures that there are no lapses in quality, and allows on-time and efficient release. Thus, as
everything is in order, the development team is more at peace. This allows it to think out of the
box and provide additional value to the user. Having a development team with an innovative
mindset is a boon for any business organization. An innovative approach, in itself, has immense
scope and leads to better quality and resolution of issues at hand. Thus, through DevOps, the
process of expanding the horizon of an app becomes much easier.
5. Faster threat detection: Automated and continuous testing of the code will make the process of
threat detection faster. As developers can locate problem areas at an early stage, they can then
resolve them faster. Thus, DevOps is a vital cog in maintaining and enhancing the quality and
performance of an app. As the overall build of the app is in capable hands, teams working
together are empowered to share feedback as and when necessary.
6. Increased customer satisfaction: Customer satisfaction is paramount in any day and age,
irrespective of the business one is involved in. DevOps is known for enhancing customer
experience, which ultimately increases the level of customer satisfaction. Dissatisfied customers
are never a good sign for any business. Feedback loops are an important component of DevOps.
These loops empower end users to track the progress of app development at various stages.
In addition, they can suggest changes (if any) or give their inputs to make the app more customer-
centric. Due to their dynamic nature, feedback loops help developers and customers remain on the
same page. Moreover, DevOps accelerates the process of app development, which eventually
shortens the delivery time. This has a positive impact on the customer satisfaction ratio.
7. Providing the much-needed edge: Along with staying true to their development process,
companies need to ensure that they sustain themselves in the cut-throat competition.
Implementing DevOps can be your trump card to provide your organization with that
much-needed edge. Competitive advantage is necessary, as it can become the deciding factor in the
popularity of an application in many cases. Some factors set expert businesses apart from
mediocre ones:
• Top-quality features
Example: Facebook’s mobile app, which is updated every two weeks, effectively tells users: you can have
what you want, and you can have it now. Ever wondered how Facebook was able to do this so smoothly?
It is the DevOps philosophy that helps Facebook ensure that its apps aren’t outdated and that users get the
best experience. Facebook accomplishes this through a code-ownership model that makes its developers
responsible for each kernel of code they write, including testing and supporting it through production and
delivery.
By adopting policies like this, Facebook has developed a DevOps culture and has successfully accelerated
its development lifecycle.
Difference between Agile and DevOps:

1. Agile: It started in the year 2001.
   DevOps: It started in the year 2007.

2. Agile: Pioneered by signatories of the Agile Manifesto such as Jon Kern and Martin Fowler.
   DevOps: Pioneered by John Allspaw and Paul Hammond at Flickr, and by The Phoenix Project by Gene Kim.

3. Agile: Agile is a method for creating software.
   DevOps: DevOps is not itself a method for creating software. Instead, the software used by DevOps teams is pre-built, dependable, and simple to deploy.

4. Agile: A development and management approach.
   DevOps: Typically an end-to-end approach to engineering and administration.

5. Agile: The Agile process centers on constant change.
   DevOps: DevOps centers on constant testing and delivery.

6. Agile: Some of the best practices adopted in Agile include backlog building.
   DevOps: DevOps has some best practices that ease the process: focus on technical excellence, and collaborate directly with customers and incorporate their feedback.

7. Agile: Agile relates mostly to the way development is carried out; any division of the company can be agile in its practices, which can be achieved through training.
   DevOps: DevOps centers more on software deployment, choosing the most dependable and most secure route.

8. Agile: All the team members working in an Agile practice share a wide variety of similar skill sets. This is one of the advantages of such a team, since in a time of need any member can lend a hand instead of waiting for the team leads or a specialist to intervene.
   DevOps: DevOps takes a different and very effective approach; most of the time it follows "divide and conquer," with work divided among the development and operations teams.

9. Agile: Agile believes "smaller and concise": the smaller the team, the better it can deliver, with fewer complexities.
   DevOps: DevOps, on the other hand, believes that "bigger is better."

10. Agile: Since Agile teams are short-lived, work happens in predetermined blocks of time called sprints. Occasionally a sprint lasts longer than a month, but usually it is about a week long.
    DevOps: DevOps, on the other hand, prioritizes reliability; because of this it can focus on a long-term schedule that minimizes business disruptions.

11. Agile: A big team for your project is not required.
    DevOps: It demands collaboration among different teams for the completion of work.
DevOps Tools
1. Git (GitLab, GitHub, Bitbucket)
Git remains indispensable in software development and DevOps due to its pivotal role in version control,
collaborative coding, and efficient project management. As technology has accelerated, the need for
streamlined and organized code management has never been greater.
Git empowers developers to collaborate on codebases, effortlessly creating and merging branches for new
features and bug fixes. Its distributed nature ensures developers can work seamlessly offline, an increasingly
valuable feature in today's remote and distributed work environments.
Additionally, Git facilitates the tracking of code modifications, making it easier to identify when and why
specific changes were made, a critical aspect of maintaining code quality and security. Software
development is essential in driving innovation and advancing progress, and Git maintains its prominent
position as the bedrock of efficient, cooperative, and secure coding methodologies.
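As a minimal sketch of the branching-and-merging workflow described above, the following shell commands create a throwaway repository, develop a feature on its own branch, and merge it back. The file names, branch name, and identity settings are illustrative placeholders, not part of any real project:

```shell
# Sketch of a feature-branch workflow in a throwaway repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"
git branch -M main                        # normalize the default branch name
git checkout -qb feature/login            # branch off for the new feature
echo "login form" >> app.txt
git commit -qam "add login form"
git checkout -q main
git merge -q feature/login                # merge the feature back into main
git log --oneline                         # shows both commits on main
```

Because every clone carries the full history, each of these steps also works offline, which is the distributed property highlighted above.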
2. Maven
Due to its enduring significance in managing project dependencies, builds, and the project lifecycle,
Maven remains a pivotal tool in software development and DevOps. As a robust build automation and project
management tool, Maven simplifies the complexities of Java-based project development by streamlining the
compilation, testing, packaging, and distribution processes. It ensures consistent and reproducible builds,
making it easier for development teams to collaborate efficiently and deliver high-quality software.
Maven's role in managing dependencies and facilitating continuous integration and deployment remains
crucial. Its ability to handle complex build scenarios and integrate seamlessly with modern DevOps
practices makes it indispensable for ensuring software projects' reliability, maintainability, and scalability
in 2024 and beyond.
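Maven's declarative model can be sketched with a minimal pom.xml fragment; the coordinates and versions below are illustrative placeholders, not from any real project:

```xml
<!-- Illustrative pom.xml fragment: coordinates and versions are placeholders -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>        <!-- hypothetical group -->
  <artifactId>demo-app</artifactId>     <!-- hypothetical artifact -->
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>                        <!-- resolved from a remote repository -->
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

With this in place, `mvn package` compiles, runs the tests, and produces the jar in one reproducible step, which is what makes builds consistent across a team.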
3. Jenkins
Its importance lies in its role as a powerful automation server that enables continuous integration and
continuous delivery (CI/CD) pipelines. Jenkins streamlines software development by automating tasks
such as building, testing, and deploying code changes, ensuring that software is delivered quickly and
reliably. With the growing complexity of modern applications, the need for efficient CI/CD processes has
become even more paramount.
Jenkins provides flexibility, extensibility, and a vast library of plugins that cater to a wide range of
technologies and tools, making it adaptable to diverse development environments. As organizations
prioritize speed, reliability, and collaboration in their software development practices, Jenkins stands as a
cornerstone tool, enabling teams to achieve seamless automation and efficient delivery of software solutions.
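A declarative Jenkinsfile gives a feel for such a pipeline. The stage names and shell commands below are illustrative assumptions (a Maven build and a hypothetical deploy script), not a prescribed setup:

```groovy
// Illustrative declarative pipeline: the build and deploy commands are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B package' }      // compile and package
        }
        stage('Test') {
            steps { sh 'mvn -B test' }         // run the automated tests
        }
        stage('Deploy') {
            when { branch 'main' }             // deploy only from the main branch
            steps { sh './deploy.sh staging' } // hypothetical deploy script
        }
    }
}
```

Checked into the repository alongside the code, this file lets Jenkins run the same pipeline on every commit.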
4. Chef
Chef, a powerful automation platform, is crucial in managing infrastructure as code. Chef empowers
organizations to achieve scalability, reliability, and speed seamlessly. By allowing the automation of server
provisioning, configuration, and maintenance, Chef enhances efficiency and consistency across the entire
infrastructure, reducing manual errors and ensuring that the infrastructure remains in its desired state.
Moreover, Chef integrates smoothly with various cloud providers, containerization technologies, and other
DevOps tools, making it adaptable to the ever-evolving tech
landscape. As organizations prioritize agility and scalability, Chef remains a vital tool
in automating complex infrastructure tasks and enabling DevOps teams to focus on innovation and
delivery.
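A small Chef recipe illustrates the "infrastructure as code" idea: you declare the desired state and Chef converges the machine to it. The nginx example below is an assumption for illustration, not taken from the notes:

```ruby
# Illustrative Chef recipe: declare desired state, Chef converges to it
package 'nginx'                       # ensure the package is installed

template '/etc/nginx/nginx.conf' do   # render config from a cookbook template
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'  # reload only when the file actually changes
end

service 'nginx' do
  action [:enable, :start]            # enabled at boot and running now
end
```

Running this repeatedly is safe: Chef only makes changes when the actual state drifts from the declared one.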
5. Puppet
Puppet is essential because it simplifies the management and orchestration of complex IT infrastructures
by allowing administrators to define infrastructure as code. It ensures consistency and repeatability in
configuration across servers, cloud instances, and containers. Businesses increasingly rely on diverse,
dynamic, and hybrid infrastructures.
Puppet's importance lies in its ability to streamline provisioning, configuration, and continuous compliance,
thus reducing operational complexity, minimizing errors, and accelerating software delivery. Puppet
continues to empower organizations to efficiently manage and scale their infrastructure while maintaining
high levels of security and compliance, making it a crucial tool for DevOps teams.
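The same desired-state idea looks like this in Puppet's own DSL; the nginx resources are an illustrative assumption:

```puppet
# Illustrative Puppet manifest: nginx is a placeholder example
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # ordering: install before managing the service
}
```

Applied across many nodes, a manifest like this is what gives the consistency and repeatability described above.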
6. Ansible
Ansible is a powerful and widely adopted automation and configuration management tool important in 2024
for several reasons. This tool stands out for its simplicity and versatility. It empowers organizations to
automate repetitive tasks, provisioning of infrastructure, and configuration management across diverse
environments, making it an invaluable asset for DevOps and IT teams.
Furthermore, Ansible's agentless architecture, declarative language, and vast library of pre-built modules
make it accessible to both beginners and seasoned professionals. As organizations prioritize efficiency,
scalability, and the rapid deployment of applications and services, Ansible remains an indispensable
DevOps toolkit, helping teams streamline operations, enhance security, and maintain infrastructure at scale,
all while reducing manual errors and increasing agility in a fast-paced technological landscape.
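A short playbook shows Ansible's agentless, declarative style. The inventory group and package below are illustrative assumptions:

```yaml
# Illustrative playbook: host group and package are placeholders
- name: Configure web servers
  hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Ansible connects over SSH and applies this to every host in the group, with no agent software to install first.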
7. Docker
Docker is crucial in modern software development and DevOps practices. It can simplify and streamline
the management of applications across various environments. Docker containers encapsulate an app and its
dependencies, ensuring consistent and reproducible deployments from development to production.
This technology enhances portability and scalability, accelerates development cycles, and reduces the "it
works on my machine" problem. In a rapidly evolving software landscape, Docker's containerization
approach remains crucial for achieving efficient, isolated, and highly flexible application deployment,
making it an essential component of DevOps and continuous delivery pipelines.
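A minimal Dockerfile sketches how an app and its dependencies are encapsulated; the Python app and entry point are hypothetical placeholders:

```dockerfile
# Illustrative Dockerfile: the app and its entry point are placeholders
FROM python:3.12-slim                               # pinned base image for reproducibility
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt  # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]                            # hypothetical entry point
```

Building this once (`docker build -t demo-app .`) yields an image that runs identically on a laptop, a test server, or production, which is exactly the "works on my machine" fix described above.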
8. Kubernetes
Kubernetes, often abbreviated as K8s, plays a central role in modern software development and operations. Its
importance lies in its ability to orchestrate, manage, and automate containerized applications at scale. As
organizations increasingly embrace microservices architectures and containerization for their applications,
Kubernetes provides the essential infrastructure for deploying, scaling, and maintaining these containers
efficiently.
The tool’s resilience, self-healing capabilities, and support for hybrid and multi-cloud environments make it vital
for achieving agility, reliability, and cost-effectiveness in application deployment. It serves as the backbone of
cloud-native ecosystems, enabling organizations to accelerate software delivery, improve resource utilization,
and respond effectively to the evolving demands of the digital landscape.
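A Deployment manifest sketches how Kubernetes keeps containers running at scale. The image name and replica count below are illustrative assumptions:

```yaml
# Illustrative Deployment: image name and replica count are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                 # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example/demo-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If a pod crashes or a node fails, the controller replaces it automatically; this is the self-healing behavior mentioned above.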
9. Slack
Slack is a crucial tool for businesses and organizations worldwide. Its significance lies in facilitating
seamless communication and collaboration among teams, whether working in the same office or remotely.
Slack's real-time messaging, file sharing, and integration capabilities streamline workflow, enhance
productivity and keep teams connected across different time zones and locations.
As the work landscape evolves, with more companies embracing hybrid and remote work models, Slack is
a vital hub for quick decision-making, project coordination, and knowledge sharing. With an
ever-expanding ecosystem of integrations and features, Slack remains at the forefront of modern workplace
communication, making it essential for businesses to stay agile, efficient, and competitive.
14. Appdynamics
AppDynamics, a leading application performance management and monitoring platform, remains critically
important as it ensures the optimal performance of modern digital businesses. As organizations rely on
complex and distributed software systems, proactively monitoring, troubleshooting, and optimizing these
applications becomes essential.
AppDynamics provides real-time visibility into application performance, allowing businesses to swiftly
identify bottlenecks, latency issues, and errors.
With the ever-growing complexity of applications, the importance of AppDynamics lies in its ability to
empower organizations to deliver exceptional user experiences, maintain application reliability, and
swiftly respond to performance issues, thereby ensuring the continued success and competitiveness of
digital businesses.
15. Raygun
It is a crucial tool in software development and DevOps because it ensures application reliability and
performance. Raygun is an application monitoring and error-tracking platform that empowers development
teams to identify, diagnose, and resolve issues in real time.
With software systems growing in complexity and the increased demand for seamless user experiences,
Raygun's importance lies in providing actionable insights into application errors and performance
bottlenecks. It enables organizations to proactively address issues, reduce downtime, and enhance user
satisfaction, leading to higher software quality and improved customer experiences.
Software is central to businesses across industries. Raygun's role in maintaining application health and
facilitating rapid issue resolution makes it a fundamental tool for DevOps professionals and software
developers.
17. Selenium
It remains a vital tool in software testing and automation due to its enduring relevance in ensuring the quality
of web applications. As technology evolves, web applications become increasingly complex, requiring
thorough testing across various browsers and platforms.
With its robust automation capabilities and extensive browser compatibility, Selenium allows developers
and QA teams to automate repetitive testing tasks efficiently, conduct cross-browser testing, and ensure
that web applications function flawlessly across diverse environments.
Its open-source nature, active community support, and integration with other DevOps tools make Selenium
a go-to choice for organizations striving for continuous delivery and the rapid deployment of high-quality
software, a cornerstone of modern software development practices.
18. Gremlin
Gremlin is an essential tool in chaos engineering, which has become increasingly critical for ensuring the
resilience and reliability of modern software systems. As technology advances and complex distributed
systems become the norm, the potential for unexpected failures and outages also rises.
Gremlin allows organizations to proactively identify weaknesses and vulnerabilities in their infrastructure
and applications by simulating controlled failures, such as network disruptions, service outages, and resource
constraints.
By intentionally inducing chaos and monitoring the system's response, teams can uncover weaknesses
before they lead to costly downtime or security breaches. Gremlin thus helps organizations build more
robust, fault-tolerant systems that can withstand real-world challenges and deliver uninterrupted services
to users.
19. ServiceNow
ServiceNow is a vital platform for organizations seeking to streamline their IT service management and
beyond. Its significance lies in its ability to provide a unified, cloud-based solution for automating and
optimizing various business processes, including ITSM, ITOM, HR, customer service, and more.
Due to the rapid digitization of services, remote work, and the growing complexity of technology
infrastructures, ServiceNow offers a comprehensive approach to managing workflows, resolving issues, and
delivering services efficiently. Its intelligent automation capabilities, analytics, and AI-driven insights
empower organizations to enhance productivity, agility, and customer satisfaction while reducing
operational costs.
ServiceNow's role in orchestrating and integrating diverse systems and processes makes it an indispensable
tool for driving digital transformation and ensuring smooth operations in the ever-evolving business
landscape of 2024.
Downtime can often lead to significant financial losses and damage to a company's reputation, so having
a practical Status Page becomes not just a convenience but a necessity. It allows organizations to showcase
their commitment to transparency and responsiveness in addressing service-related issues, ultimately
fostering stronger customer relationships and trust.
23. Scripting
Scripting remains vital due to its pivotal role in automating and streamlining various aspects of software
development, system administration, and DevOps practices. Scripting languages like Python, Bash, and
PowerShell empower tech professionals to write code that can execute repetitive tasks, manipulate data,
and orchestrate complex processes efficiently.
Scripting facilitates rapid prototyping, configuration management, and the creation of automated
deployment pipelines. It enhances productivity, ensures consistency and reduces human error in tasks
ranging from software testing and deployment to infrastructure provisioning and monitoring. As
organizations increasingly embrace DevOps and cloud-native technologies, scripting keeps teams
competitive and adaptive in the tech landscape.
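As a minimal sketch of the kind of automation glue described above, the following Python script runs a sequence of pipeline steps and fails fast on the first error. The echoed commands are stand-ins for real build or deploy steps:

```python
# Minimal automation-helper sketch: run shell steps, fail fast on errors.
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; abort the whole script if it fails."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"step failed: {cmd}\n{result.stderr}")
    return result.stdout.strip()

if __name__ == "__main__":
    # Stand-ins for real build/test/deploy commands.
    print(run("echo build ok"))
    print(run("echo tests passed"))
```

Failing fast like this is the scripted equivalent of a pipeline stage going red: later steps never run against a broken earlier step.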
24. Terraform
Terraform plays a crucial role in modern infrastructure provisioning and management. It allows
organizations to define and deploy infrastructure as code, enabling the automated creation and configuration
of cloud resources, containers, and other infrastructure components. Cloud computing, microservices, and
containerization have become the norm in
2024. Terraform provides the agility and scalability required to keep up with the dynamic demands of modern
applications.
Terraform’s importance lies in its ability to bring consistency, version control, and automation to
infrastructure operations, thereby reducing manual errors, streamlining DevOps workflows, and
facilitating applications' rapid and reliable deployment in an increasingly complex and cloud-centric
environment. As organizations adopt cloud-native
technologies, Terraform remains essential to ensure efficient and consistent infrastructure management.
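A small Terraform configuration illustrates infrastructure as code in practice. The provider, AMI, and instance type below are hypothetical placeholders:

```hcl
# Illustrative Terraform configuration: provider and values are placeholders
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # hypothetical image ID
  instance_type = "t3.micro"
  tags = {
    Name = "demo-web"
  }
}
```

Because this file lives in version control, `terraform plan` can show the exact changes for review before `terraform apply` makes them, which is where the consistency and automation benefits above come from.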
25. Phantom
Phantom enhances security automation and incident response capabilities. In today's rapidly evolving
threat landscape, organizations face a constant barrage of cybersecurity incidents, and the ability to
respond swiftly and effectively is necessary. It provides a platform for automating security workflows,
from detecting and investigating potential threats to orchestrating responses and mitigating risks.
Phantom’s importance lies in its capacity to reduce response times, increase consistency in incident handling,
and free up manual resources from repetitive tasks. With the growing complexity of cyber threats, Phantom
empowers security teams to defend against attacks and safeguard critical assets proactively.
26. Nagios
Nagios, an open-source monitoring and alerting system, remains vital due to its enduring significance in
maintaining the reliability and performance of IT infrastructure and applications. Organizations increasingly
rely on complex systems and services. Nagios plays a crucial role by providing real-time monitoring and
alerting capabilities, allowing IT
teams to detect and address issues before they impact users or cause system outages.
Its versatility, extensibility, and support for both on-premises and cloud environments make Nagios a
valuable tool for ensuring critical systems' availability, stability, and security, aligning perfectly with the
demands of modern IT operations and DevOps practices.
27. Vagrant
Vagrant continues to play a crucial role in software development and DevOps. It is a tool that simplifies
creating and managing reproducible development environments. Its importance lies in its ability to provide
developers and DevOps teams with a consistent and isolated environment for software development, testing,
and deployment.
With the ever-evolving complexity of software stacks, dependencies, and infrastructure configurations,
Vagrant remains essential in ensuring these environments are easily shareable, scalable, and maintainable. It
allows developers to work seamlessly across various operating systems and provides a standardized setup
that minimizes compatibility issues.
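A Vagrantfile sketch shows how such a shared environment is described. The box name and provisioning commands are illustrative assumptions:

```ruby
# Illustrative Vagrantfile: box name and provisioning are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"          # shared, reproducible base box
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nginx                # same setup on every machine
  SHELL
end
```

Checked into the project, this file lets every developer run `vagrant up` and get an identical machine, regardless of their host operating system.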
28. Sentry
Sentry plays a critical role in modern software development and DevOps practices. With software
applications' increasing complexity and scale, identifying and addressing errors and issues has become
crucial.
Sentry is vital because it provides real-time error tracking and monitoring, allowing development teams to
proactively detect and diagnose issues, whether they occur in production or during development. Its
importance lies in minimizing downtime, improving user experience, and maintaining the overall health
and reliability of software systems.
29. Gradle
Gradle continues to be a vital tool in software development and DevOps. Gradle is an advanced build
automation system that plays a crucial role in managing dependencies, building projects, and orchestrating
complex workflows efficiently. Its importance lies in its versatility and scalability, as it caters to various
project sizes and types.
Gradle's ability to easily handle multi-language, multi-project builds and its support
for plugin-based customization make it indispensable in modern software development. As organizations
increasingly adopt microservices architectures and cloud-native technologies, Gradle's capabilities are
instrumental in managing the complexity of building, testing, and deploying applications across diverse
environments.
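A minimal build.gradle gives a feel for Gradle's declarative build scripts; the plugin and dependency versions below are illustrative placeholders:

```groovy
// Illustrative build.gradle: plugin and dependency versions are placeholders
plugins {
    id 'java'
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.10.0'
}

test {
    useJUnitPlatform()   // `gradle test` then runs the JUnit 5 suite
}
```

From this one file, `gradle build` resolves dependencies, compiles, tests, and packages the project, and the same script scales to multi-project builds.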
DevOps Architecture
Development and operations both play essential roles in order to deliver applications. Development comprises analyzing the requirements, designing, developing, and testing the software components or frameworks.
DevOps architecture is used for applications hosted on cloud platforms and for large distributed
applications. Agile development is used in the DevOps architecture so that integration and delivery can be
continuous. When the development and operations teams work separately from each other, it is time-
consuming to design, test, and deploy, and if the teams are not in sync with each other, it may cause a
delay in the delivery. DevOps enables the teams to mend their shortcomings and increases productivity.
2) Code: Many good practices, supported by tools such as Git, enable the code to be used effectively: they
ensure the code is written for the business, help track changes, notify the team about the reason behind a
difference between the actual and the expected output, and, if necessary, allow reverting to the code
originally developed. The code can be appropriately arranged in files, folders, etc., and it can be reused.
3) Test: The application will be ready for production after testing. With manual testing, more time is
consumed in testing and moving the code to production. The testing can be automated, which decreases the
testing time so that the time to deploy the code to production can be reduced, as automating the running
of the scripts removes many manual steps.
4) Plan: DevOps uses the Agile methodology to plan the development. With the operations and development
teams in sync, it helps in organizing the work and planning accordingly to increase productivity.
5) Monitor: Continuous monitoring is used to identify any risk of failure. It also helps in tracking the
system accurately so that the health of the application can be checked. Monitoring becomes easier with
services whose log data can be monitored through third-party tools such as Splunk.
6) Deploy: Many systems can support schedulers for automated deployment. A cloud management
platform enables users to capture accurate insights and view optimization scenarios and trend analytics
through the deployment of dashboards.
7) Operate: DevOps changes the traditional approach of developing and testing separately. The teams
operate in a collaborative way where both teams actively participate throughout the service lifecycle.
The operations team interacts with developers, and together they come up with a monitoring plan that
serves the IT and business requirements.
8) Release: Deployment to an environment can be done by automation, but deployment to the production
environment is done by manual triggering. Many release-management processes deliberately deploy to the
production environment manually in order to lessen the impact on customers.
DevOps Lifecycle
DevOps defines an agile relationship between operations and development. It is a process practiced by the
development team and operational engineers. Learning DevOps is not complete without understanding the
DevOps lifecycle phases. The DevOps lifecycle includes seven phases, as given below:
1) Continuous Development
This phase involves the planning and coding of the software. The vision of the project is decided during the planning phase, and the developers begin developing the code for the application. There are no DevOps tools required for planning, but there are several tools for maintaining the code.
2) Continuous Integration
This stage is the heart of the entire DevOps lifecycle. It is a software development practice in which the
developers are required to commit changes to the source code more frequently, on a daily or weekly basis.
Every commit is then built, which allows early detection of problems if they are present. Building code
involves not only compilation but also unit testing, integration testing, code review, and packaging.
The code supporting new functionality is continuously integrated with the existing code. Therefore, there is
continuous development of software. The updated code needs to be integrated continuously and smoothly
with the systems to reflect changes to the end-users.
3) Continuous Testing
In this phase, the developed software is continuously tested for bugs. For constant testing, automation
testing tools such as TestNG, JUnit, Selenium, etc. are used. These tools allow QAs to test multiple
codebases thoroughly in parallel to ensure that there are no flaws in the functionality. In this phase, Docker
containers can be used for simulating the test environment.
Testing may also occur in the form of documentation files or produce large-scale data about the application
parameters while the software is in continuous use. System errors such as an unreachable server or low
memory are resolved in this phase, maintaining the security and availability of the service.
Automation testing saves a lot of time and effort for executing the tests instead of doing this manually. Apart
from that, report generation is a big plus. The task of evaluating the test cases
that failed in a test suite gets simpler. Also, we can schedule the execution of the test cases at predefined
times. After testing, the code is continuously integrated with the existing code.
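The kind of automated check such a pipeline runs on every commit can be sketched as follows. The function under test is a hypothetical example; the same pattern is what tools like JUnit and TestNG provide for Java:

```python
# Sketch of an automated test suite that a CI stage runs on every commit.
import unittest

def apply_discount(price, percent):
    """Hypothetical business function under test."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent_is_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
```

Invoked with `python -m unittest`, the suite runs in seconds on every commit; a single failing assertion fails the build, which is how flaws are caught before they reach production.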
4) Continuous Monitoring
Monitoring is a phase that involves all the operational factors of the entire DevOps process, where important
information about the use of the software is recorded and carefully processed to find out trends and identify
problem areas. Usually, the monitoring is integrated within the operational capabilities of the software
application.
5) Continuous Feedback
The application development is consistently improved by analyzing the results from the operations of the
software. This is carried out by placing the critical phase of constant feedback between the operations and
the development of the next version of the current software application.
Continuity is the essential factor in DevOps, as it removes the unnecessary steps otherwise required to take
a software application from development, use it to find its issues, and then produce a better version.
Without that continuity, the application loses the efficiency it could have had, and the number of interested
customers shrinks.
6) Continuous Deployment
In this phase, the code is deployed to the production servers. It is also essential to ensure that the code is
deployed correctly on all the servers. New code is deployed continuously, and configuration management
tools play an essential role in executing these tasks frequently and quickly. Popular tools used in this
phase include Chef, Puppet, Ansible, and SaltStack.
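Configuration management tools such as Chef, Puppet, and Ansible work by comparing a desired state against the actual state of a server and applying only the missing changes, so that re-running them is safe (idempotence). A toy Python sketch of that reconciliation idea follows; the state dictionaries are invented, and this is not how any of these tools are implemented internally:

```python
def plan_changes(desired, actual):
    """Return the actions needed to bring `actual` in line with `desired`.

    Running the plan against an already-converged state yields no actions,
    which is the idempotence property configuration management relies on.
    """
    actions = []
    for key, value in desired.items():
        if actual.get(key) != value:
            actions.append(("set", key, value))   # missing or wrong value
    for key in actual:
        if key not in desired:
            actions.append(("remove", key))        # drift to clean up
    return actions

# Hypothetical server state: nginx is installed but on the wrong port,
# and a debug flag has drifted in.
desired = {"nginx": "installed", "port": 8080}
actual = {"nginx": "installed", "port": 80, "debug": "on"}
print(plan_changes(desired, actual))
# After applying these actions, a second run would return an empty list.
```

Declaring the desired end state, rather than scripting imperative steps, is what lets these tools run frequently and quickly without breaking servers that are already correct.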
Containerization tools also play an essential role in the deployment phase. Vagrant
and Docker are popular tools used for this purpose. They help produce consistency across the development,
staging, testing, and production environments, and they help scale instances up and down smoothly.
Containerization tools maintain consistency across the environments where the application is developed,
tested, and deployed. Because they package and replicate the same dependencies and packages used in the
testing, development, and staging environments, the risk of errors or failures in production is greatly
reduced, and the application becomes easy to run on different computers.
7) Continuous Operations
All DevOps operations rest on continuity, with complete automation of the release process, which allows the
organization to continually accelerate its overall time to market. It is clear from the discussion that
continuity is the critical factor in DevOps: it removes the steps that distract development, delay the
detection of issues, and postpone a better version of the product by several months. With DevOps, we can
make any software product more efficient and increase the overall count of customers interested in the
product.
AWS
AWS stands for Amazon Web Services, a comprehensive cloud computing platform provided by Amazon. AWS offers
a wide range of services over the Internet with a pay-as-you-go pricing model, such as storage, computing
power, databases, machine learning services, and much more. AWS enables both businesses and individual users
to host applications effectively, store data securely, and use a wide variety of tools and services that
improve the flexibility of managing IT resources.
1. Cost savings: One of the biggest benefits of AWS is that it can help businesses save money. As
mentioned previously, businesses can avoid the high upfront costs of traditional infrastructure
with AWS and pay only for the resources they use. Traditionally, businesses had to invest in
hardware and software upfront, which often led to overspending.
For example, if a business needs to run a website that gets 1,000 visitors per day, it would traditionally
need to purchase and maintain enough servers to support this traffic. With AWS, the business pays only for
the compute resources used when someone visits the website, which can result in significant cost savings.
2. Flexibility: Another key benefit of AWS is its flexibility. Businesses are able to customize their
virtual environment – whether the operating system, database, programming language, or
something else – to meet their specific needs. Especially in today’s climate, the migration
process to the cloud should be as frictionless as possible – and AWS makes that possible.
Regardless of your use case or
industry, AWS can be tailored to fit your needs, whether you're looking for a single cloud-hosted
application or an entire suite of integrated solutions.
3. Reliability: AWS is known for being reliable, with an uptime of 99.9%. This makes it a great
platform for mission-critical applications that need to be available 24/7. AWS also offers the
ability to deploy resources across multiple availability zones for even greater reliability. The cloud
platform also has a number of features that make it easier to ensure reliability, such as autoscaling
and auto-healing. Autoscaling allows businesses to automatically scale their resources up or down
based on demand, while auto-healing enables them to quickly identify and replace any faulty
components.
4. Security: Businesses can take advantage of advanced security features, such as identity and access
management, to help protect their data. AWS's robust, end-to-end infrastructure is designed to withstand
attacks, and AWS provides customers with tools to help them monitor and
respond to threats. When it comes to storage, Amazon S3 provides customers with a secure and
reliable way to store and access data. The service is designed to be highly scalable and resilient,
with built-in redundancy. Fine-grained identity and access controls can be applied to S3 buckets and
objects, giving customers control over who has access to their data. Security tasks can be
automated with AWS CloudFormation, making it easier for businesses to manage their security
policies. And, you can rest easy knowing that AWS takes privacy seriously, with comprehensive
customer data protection and compliance measures.
5. Compliance: By compliance, we mean that certain businesses are required to follow specific
regulations. Financial services companies in the United States, for example, must comply with the
Sarbanes-Oxley Act, while healthcare companies must comply with HIPAA, and education and energy companies
face regulations of their own. AWS provides a number of compliance-related features and
services, such as data encryption and identity and access management, to help businesses meet
these requirements.
7. Developer Tools: Developer tools are designed to make it easier for developers to create,
deploy, and manage applications – and AWS provides developers with what they need to build
applications quickly and easily. By leveraging developer
tools, developers can save time and money by automating tedious tasks. They also benefit from
access to AWS’s extensive library of pre-built applications that can help them get their projects
off the ground quickly. Services such as AWS Elastic Beanstalk and AWS CloudFormation
can help them automate the process of creating and deploying applications.
Developers can also improve productivity with AWS by using AWS CodeCommit to
store and manage source code.
8. Integration: Thanks to its many integrations with other Amazon services, as well as third-party
services, AWS makes it easy for businesses to get started with cloud computing. AWS provides
a wide range of services that can be easily integrated into existing business infrastructure. This
allows businesses to add new features and capabilities without having to make major changes or
invest in new hardware or software. For instance, if a business wants to add mobile capabilities
to its website,
it can take advantage of Amazon’s Mobile SDK and Web Services. These tools allow businesses
to quickly develop and deploy mobile apps that connect directly with their existing infrastructure.
9. Management Console: The AWS management console is a web-based interface that provides
users with a simple way to interact with and manage their AWS resources – essentially a place
where you can access and manage everything on the cloud. It provides a graphical view of all the
resources associated with an account, as well as tools for creating and configuring new resources.
Compared to traditional command-line interfaces, the AWS management console saves time
and makes it easier for users to get the most out of their AWS services. Not only that, but your
business gets access to 350+ free digital training courses through the AWS Academy, covering
topics such as cloud fundamentals, DevOps, security, and big data.
This means you can train your employees on how to use AWS, and in turn, help them become
more efficient at their jobs.
10. Scalability: With an on-demand service, businesses can quickly spin up new servers as needed
with just a few clicks. This makes it much easier to scale resources up or down as demand
changes, allowing businesses to save costs and maintain performance even during peak periods.
For example, if a business is expecting a sudden surge in traffic due to an advertising campaign
or seasonal event, they can easily add more capacity to their server infrastructure to handle the
increased load. Bru Textiles, a specialty textile company in Belgium, was able to quickly scale
its infrastructure by leveraging AWS. The company went digital in order to grow and offer new services,
bringing in digital twin technology to give its customers an idea of the texture and feel of its physical
fabrics.
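The autoscaling behaviour described in the reliability and scalability points above boils down to a threshold rule: add capacity under load, shed it when idle. A simplified Python sketch follows; the thresholds and limits are invented, and real AWS Auto Scaling policies are configured in the service rather than hand-coded like this:

```python
def desired_instances(current, cpu_utilisation,
                      scale_up_at=70.0, scale_down_at=30.0,
                      minimum=1, maximum=10):
    """Decide how many instances to run based on average CPU utilisation."""
    if cpu_utilisation > scale_up_at:
        target = current + 1          # add capacity under load
    elif cpu_utilisation < scale_down_at:
        target = current - 1          # shed capacity when idle
    else:
        target = current              # stay put inside the comfort band
    # Never go below the floor or above the ceiling of the fleet.
    return max(minimum, min(maximum, target))

print(desired_instances(3, 85.0))  # high load: grow the fleet
print(desired_instances(3, 10.0))  # low load: shrink the fleet
```

The minimum/maximum clamp mirrors the min and max capacity settings of a real scaling group: demand drives the fleet size, but only within bounds the operator has approved.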
AWS Applications
• Storage and Backup: Storage and backup are important for any Cloud Computing service. AWS
provides you with reliable storage services like Amazon Simple Storage Service to store large-
scale data and backup services like AWS Backup to take backups of this data, which is stored in
other AWS services. AWS stores the data in three different availability zones so that if one fails,
you can still access your data. This makes AWS storage reliable and easily accessible. Therefore,
companies with huge application data to store and backup securely can use AWS.
• Big Data: One of the biggest challenges faced by companies these days is Big Data. The companies
are struggling to store their large amounts of data using traditional methods. With AWS Big Data
storage services, they can manage to store their data even if the data limit increases unexpectedly
as AWS provides virtually unlimited data storage with scale-in and scale-out options. AWS offers
easy access and faster data retrieval as well. For data processing, it offers services like EMR, with
which the companies can easily set up, operate, and scale their big data. Therefore, efficiently
storing and managing Big Data is among the top AWS applications.
• Enterprise IT: AWS is a one-stop solution for any IT business. Many features of it such as
secure storage, scalability, flexibility, and elasticity support companies to innovate faster than
ever before. Using AWS makes IT enterprises profitable in terms of both money and time: since AWS
maintains the cloud architecture itself, companies need not spend time and money on professionals to
do the same.
• Social Networking: Social networking is essential for businesses in the present-day scenario
where Digital Marketing is key, and it is easier with AWS. Companies can connect with
customers and stakeholders and communicate through social networking sites and develop their
business. Services like the AWS social networking engine, which is powered by the TurnKey
GNU/Linux (HVM) AMI stack, are used for performance and scalability to help companies build
a suitable social networking site and gain profits.
• Mobile Apps: Mobile applications are embedded in day-to-day life. With AWS, you can create an
app in your desired programming language and keep applications consistently available and
reliable using its high-performance compute, storage, database, and application services. You
can take advantage of AWS auto-scaling and its managed relational database service for better
performance of your apps.
• Websites: AWS offers a wide range of website hosting options to create the best website for
customers. Services like Amazon Lightsail include everything needed, such as a virtual machine, SSD-
based storage, data transfer, DNS management, and a static IP, to launch a website in a way
that the user can manage easily. Amazon EC2, AWS Lambda, Elastic Load Balancing,
AWS Amplify, Amazon S3, etc. also help users build reliable and scalable websites.
• Gaming: AWS has been serving many gaming studios. Combining Amazon EC2 and S3 services
with CloudFront enables gaming websites to deliver high-quality gaming experiences to their
customers regardless of location.
• Netflix
Netflix is an entertainment platform that started in the United States but eventually expanded to many
countries and became hugely popular. At one point, however, Netflix confronted a scalability problem because
of a sudden increase in viewers, and that is what made it choose AWS services. Netflix reports that once it
started using AWS services like DynamoDB and Cassandra for its distributed databases, it could handle the
data easily, so scalability is a great advantage of AWS. Netflix has adopted around 100,000 server instances
from AWS for computing and storage, databases, analytics, recommendation engines, and video transcoding.
• McDonald’s
McDonald’s is the world’s largest fast-food company, serving around 64 million people per day. The
company’s growth reached another level when it started home deliveries. Utilizing AWS services, McDonald’s
created a platform that integrates local restaurants with delivery partners such as Uber Eats. Scalability
is also a reason the company chose AWS: with the AWS microservices architecture, McDonald’s platform can
handle 20,000 orders per second and integrate easily with its global partners.
• Airbnb
Airbnb is an international online marketplace for rental homes. This platform connects people who are
looking for rental accommodation with those who want to rent out their houses. Before long, Airbnb became
unable to handle the constant stream of data on its website from customers. That is when it started using
the Amazon EC2 service and Elastic Load Balancing, which distributes incoming traffic across multiple
Amazon EC2 instances. In this way, Airbnb could absorb the traffic, and customers could use the online
platform without any disruption.
• Novartis
Novartis is the best example for AWS use cases in healthcare. Novartis is one of the world’s largest
healthcare companies that provides solutions for patients’ well-being. It adopted Amazon EC2 services and
built a platform using other services such as Amazon Simple Storage Service, Amazon Elastic Block Store,
and four availability zones. Data Analysts of Novartis are taking advantage of the AWS services and still
implementing new solutions for the patients.
• Expedia
Expedia is a worldwide online travel agency that has always focused on the constant development and
innovation of its platform to offer an extraordinary user experience for its clients. Since 2010, Expedia has
been using AWS services to build a standard deployment model for better infrastructure as AWS offers the
best data security through different availability zones.
• Samsung
If you use a Samsung mobile phone, you may know about the Samsung app store. To set up the apps stocked in
its store, the company started using AWS services. Using AWS app development services, Samsung wanted to
let its customers download apps anywhere without network congestion.
• NASA
NASA (National Aeronautics and Space Administration) had long wanted to create a library presenting all its
achievements to people through pictures and videos of space. It later created such platforms, but because it
had 10 different NASA centers, it could not provide the best experience for viewers. What it needed was an
easy-access platform for
people to search for and view images and videos. Then, NASA started adopting many services from AWS
to solve this problem, which included Amazon Elastic Compute Cloud, Elastic Load Balancing, Amazon
Simple Storage Service, Amazon Simple Queue Service, etc. Among these, Amazon S3 helped the company
store all the incoming data such as photos, videos, and audio files without any hassle.
• Facebook
Facebook, without a doubt, is a widespread social media platform. To build a scalable application, Facebook
used services such as Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon Relational
Database Service, Amazon SimpleDB, Amazon CloudFront, Amazon Simple Queue Service, etc. Amazon
RDS makes it easy for the platform to set up, operate, and scale its database in the cloud.
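Elastic Load Balancing, mentioned in the Airbnb example above, distributes incoming requests across a pool of instances. A deliberately naive round-robin sketch in Python follows; the instance names are invented, and a real load balancer also performs health checks, connection draining, and more:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of instances."""

    def __init__(self, instances):
        # cycle() rotates through the pool endlessly: a, b, c, a, b, c, ...
        self._pool = cycle(instances)

    def route(self, request):
        instance = next(self._pool)
        return f"{request} -> {instance}"

lb = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
for i in range(4):
    print(lb.route(f"req{i}"))
# The fourth request wraps around to ec2-a again.
```

Spreading requests this way is what prevented any single Airbnb server from being overwhelmed: no instance sees more than its share of the traffic.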
The important cloud services provided by AWS, organized by category, are given below:
1. Compute
• Amazon EC2: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides
secure, resizable compute capacity in the cloud. It allows organizations to obtain and configure
virtual compute capacity in the cloud. You can select from a variety of operating systems and
resource configurations like memory, CPU, and storage that are required for your
application. Amazon EC2 enables you to increase or decrease capacity within minutes. You can
use one or hundreds or even thousands of server instances simultaneously. Because this is all
controlled with web service APIs, your application can automatically scale itself up and down
depending on its needs. Amazon EC2 is integrated with most AWS services, such as Amazon
Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), and
Amazon Virtual Private Cloud (Amazon VPC) to provide a complete, secure solution for
computing applications. Amazon EC2 is an example of Infrastructure as a Service (IaaS). EC2
delivers secure, reliable, cost-effective, high-performance compute infrastructure to meet the
needs of demanding businesses, and it is one of the easiest ways of provisioning servers on the
AWS Cloud with access to the operating system.
• AWS Lambda: AWS Lambda is a serverless, event-driven compute service that allows you to run
code without managing servers. You pay only for the compute time you consume; there is no
charge when your code is not running. With AWS Lambda, you can run code for virtually any type
of application with zero administration: just upload your code, and Lambda takes care of
everything required to run and scale it with high availability. You can set up your code to
trigger automatically from other AWS services (over 200 of them can trigger Lambda), or you can
call it directly from any web or mobile app.
• AWS Elastic Beanstalk: AWS Elastic Beanstalk is a Platform as a Service that facilitates quick
deployment of your applications by providing all the application services you need. Beanstalk
is a plug-and-play platform that supports a large range of languages and environments,
including Node.js, Java, PHP, Python, and Ruby, so you can develop your application to meet
your requirements and simply deploy it on Elastic Beanstalk. The main aim of AWS Elastic
Beanstalk is to let you focus on the deployment and management of your applications. You
can simply upload your code, and AWS Elastic Beanstalk automatically handles the deployment,
from capacity provisioning, load balancing, and auto-scaling to application health monitoring. At
the same time, you retain full control over the AWS resources powering your application and can
access the underlying resources at any time.
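A Lambda function in Python is simply a handler that receives an event and returns a response, which is why there are no servers to manage on the developer's side. The sketch below is runnable locally; the event shape and field names are invented (real triggers such as API Gateway define their own event formats):

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; the platform supplies event and context."""
    # Pull a field out of the (hypothetical) event, with a default.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can call the handler directly; in AWS, a trigger does this for us.
print(lambda_handler({"name": "DevOps"}, None))
```

Because the unit of deployment is just this function, Lambda can run zero copies when idle (costing nothing) and many copies in parallel under load, which is the scaling model the text describes.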
2. Networking
• Amazon VPC: Amazon VPC is your network environment in the cloud. It allows you to create a
private network within the AWS cloud that uses many of the same concepts and constructs as an
on-premises network. Amazon VPC also gives you complete control of the network configuration.
Customers can define normal networking configuration items such as IP address ranges, subnet
creation, route table creation, network gateways, and security settings. Amazon VPC is an AWS
foundational service and integrates with numerous AWS services. For instance, Amazon EC2
instances are deployed into your
Amazon VPC. Similarly, Amazon Relational Database Service (Amazon RDS) database
instances deploy into your Amazon VPC, where the database is protected by the structure of
the network just like your on-premises network. In short, Amazon VPC makes it easy to launch
AWS resources into an isolated virtual network environment within the AWS cloud.
• Amazon Route 53: Amazon Route 53 is a highly available and scalable cloud Domain Name
System (DNS) web service. It is designed to give developers and businesses an extremely reliable
and cost-effective way to route end users to Internet applications by translating human-readable
names, such as [Link], into the numeric IP addresses that computers use to
connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
3. Storage
• Amazon S3 (Simple Storage Service): Amazon Simple Storage Service (Amazon S3) is object
storage with a simple web service interface to store and retrieve any amount of data from
anywhere on the web. It is designed to provide an infinite amount of storage and it is delivered
with 99.999999999% durability. You can use Amazon S3 as primary storage for cloud-native
applications, as a target for backup and recovery, and for disaster recovery. It offers industry-leading
scalability, data availability, security, and
performance. It’s simple to move large volumes of data into or out of Amazon S3 with Amazon’s
cloud data migration options. Once data is stored in Amazon S3, it can be automatically tiered into
lower cost, longer-term cloud storage classes like Amazon S3 Standard – Infrequent Access and
Amazon Glacier for archiving.
• Amazon Glacier: Amazon Glacier is a secure, durable, and extremely low-cost storage
service for data archiving and long-term backup. Data stored in Amazon Glacier takes several
hours to retrieve, which is why it is ideal for archiving rather than for data that requires
fast access.
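The age-based tiering described above (S3 Standard, then Standard-IA, then Glacier) can be sketched as a simple decision rule. The day thresholds below are illustrative only; real lifecycle rules are configured per bucket in S3, not hand-coded:

```python
def storage_class(age_days, to_infrequent=30, to_glacier=90):
    """Pick a storage tier for an object based on how old it is."""
    if age_days >= to_glacier:
        return "GLACIER"        # cheapest; retrieval takes hours
    if age_days >= to_infrequent:
        return "STANDARD_IA"    # infrequent access, lower cost
    return "STANDARD"           # hot data, fastest access

# Hypothetical objects of increasing age move down the tiers.
for age in (5, 45, 200):
    print(age, storage_class(age))
```

The design trade-off is cost versus retrieval speed: the older (and presumably colder) an object gets, the cheaper but slower its tier becomes.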
4. Databases
• Amazon RDS (Relational Database Service): Amazon Relational Database Service (Amazon
RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-
efficient and resizable capacity while managing time-consuming database administration tasks,
freeing you up to focus on your applications and business. Amazon RDS is also
available on several database instance types, optimized for memory, performance, or I/O.
Amazon RDS provides you with six familiar database engines to choose from, including Amazon
Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server.
• Amazon DynamoDB (Non-Relational Database): Amazon DynamoDB is a fast and flexible
NoSQL database service for all applications that need consistent, single-digit millisecond latency
at any scale. It is a fully managed database and supports both document and key-value data models.
With DynamoDB, you can simply create a database table that stores and retrieves any amount of
data and serves any level of request traffic. Its flexible data model and
reliable performance make it a great fit for mobile, web, gaming, Internet of Things (IoT), and
many other applications. DynamoDB provides many features like
• built-in security
• backups
• automated multi-region replication
• in-memory caching
• data export tools.
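The key-value model described for DynamoDB can be pictured as items addressed by a partition key and a sort key. The conceptual Python sketch below mimics the idea only; it is not the real DynamoDB API or boto3, and all table contents are invented:

```python
class TinyKeyValueTable:
    """Conceptual model of a key-value table: items addressed directly by key."""

    def __init__(self):
        # Items live under a composite key (partition key, sort key).
        self._items = {}

    def put_item(self, partition_key, sort_key, item):
        self._items[(partition_key, sort_key)] = item

    def get_item(self, partition_key, sort_key):
        # A point lookup by key never scans the table, which is what lets
        # the real service keep latency flat regardless of table size.
        return self._items.get((partition_key, sort_key))

table = TinyKeyValueTable()
table.put_item("user#42", "profile", {"name": "Asha", "plan": "pro"})
print(table.get_item("user#42", "profile"))
```

Designing access around keys rather than ad-hoc queries is the core discipline of using a NoSQL store like DynamoDB.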
The following are the components that make up the AWS infrastructure:
o Availability Zones
o Region
o Edge locations
o Regional Edge Caches
Availability Zones
o An availability zone is a facility located somewhere in a country or city. Inside this facility,
i.e., a data center, there can be multiple servers, switches, load balancers, and firewalls. The
components that make up the cloud sit inside these data centers.
o An availability zone can consist of several data centers, but if they are close together, they are
counted as one availability zone.
Region
o A region is a collection of data centers which are completely isolated from other regions.
o A region consists of two or more availability zones connected to each other through links.
o Availability zones are connected through redundant and isolated metro fibers.
Edge Locations
o Edge locations are the endpoints for AWS used for caching content.
o Edge locations consist of CloudFront, Amazon's Content Delivery Network (CDN).
o There are many more edge locations than regions; currently, there are over 150 edge locations.
o An edge location is not a region but a small site that AWS operates, used for caching content.
o Edge locations are mainly located in most of the major cities to distribute the content to end users
with reduced latency.
o For example, if a user accesses your website from Singapore, the request is redirected to the edge
location closest to Singapore, where cached data can be read.
o AWS announced a new type of edge location in November 2016, known as a Regional Edge Cache.
o Regional Edge cache lies between CloudFront Origin servers and the edge locations.
o A regional edge cache has a larger cache than an individual edge location.
o Data that is removed from the cache at an edge location is retained at the regional edge cache.
o When a user requests data that is no longer available at the edge location, the edge location
retrieves the cached data from the regional edge cache instead of from the origin servers, which
have higher latency.
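The lookup order just described, edge location first, then regional edge cache, then the origin only as a last resort, can be sketched as a two-tier cache in Python. All keys and values below are invented for illustration:

```python
def fetch(key, edge_cache, regional_cache, origin):
    """Resolve a request through a CloudFront-style cache hierarchy."""
    if key in edge_cache:
        return edge_cache[key], "edge"               # fastest path
    if key in regional_cache:
        edge_cache[key] = regional_cache[key]        # warm the edge for next time
        return regional_cache[key], "regional"
    value = origin[key]                              # slowest path: origin fetch
    regional_cache[key] = value                      # populate both tiers
    edge_cache[key] = value
    return value, "origin"

edge, regional = {}, {"logo.png": b"png-bytes"}
origin = {"logo.png": b"png-bytes", "video.mp4": b"mp4-bytes"}
print(fetch("logo.png", edge, regional, origin))     # served from the regional tier
print(fetch("logo.png", edge, regional, origin))     # now served from the edge
```

Each miss populates the faster tier, so repeated requests for popular content are answered closer and closer to the user, which is exactly why the regional tier exists.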
What Is Cloud Computing?
Cloud computing is the use of hardware and software components in an off-premises location to deliver a
service to a network. Users can access files and applications from any device that can access the internet.
• Cloud providers can pool computing resources to serve multiple customers using a
multi-tenant model
• Servers are maintained easily and there is nearly zero downtime
• Users can access cloud data and upload it on the cloud from any device with a solid
internet connection
• Cloud environments can be modified according to the user's requirements and are easily
accessible
• It offers pay-as-you-go pricing, meaning you only pay for the resources you use
1. Infrastructure as a Service: IaaS delivers virtualized computing resources over the Internet.
Users can rent virtual machines, storage, and networking infrastructure, allowing for easy
scalability without investing in physical hardware. Examples include AWS EC2 and
Azure Virtual Machines.
2. Platform as a Service: PaaS offers a robust platform for developers to build, deploy, and manage
apps without worrying about the underlying infrastructure. It simplifies application development
and deployment, with services like Google App Engine and Heroku leading the way.
3. Software as a Service: SaaS offers software applications on a subscription basis, accessible via a
web browser. Users don't need to install or maintain software locally, making it ideal for
collaboration tools (e.g., Microsoft 365, Google Workspace) and CRM systems (e.g., Salesforce).
4. Function as a Service: FaaS allows developers to execute code in response to events without
managing servers. It is highly scalable and cost-efficient, exemplified by AWS Lambda and
Azure Functions. FaaS is also known as serverless computing.
5. Container as a Service: CaaS enables the deployment and management of containerized
applications using orchestration tools like Kubernetes. It provides portability and scalability for
applications across different cloud environments.
The differences between IaaS, PaaS, and SaaS can be summarized as follows:
• Model: IaaS is a service model that provides virtualized computing resources over the internet.
PaaS is a cloud computing model that delivers tools used for the development of applications.
SaaS is a service model in cloud computing that hosts software to make it available to clients.
• Technical understanding: IaaS requires technical knowledge. PaaS requires some knowledge for the
basic setup. SaaS has no requirement about technicalities; the company handles everything.
• Enterprise services: IaaS: AWS Virtual Private Cloud. PaaS: Microsoft Azure. SaaS: IBM cloud
analysis.
• Outsourced cloud services: IaaS: Salesforce. PaaS: [Link], Gigaspaces. SaaS: AWS, Terremark.
• User controls: IaaS: operating system, runtime, middleware, and application data. PaaS: data of
the application. SaaS: nothing.
• Others: IaaS is highly scalable and flexible. PaaS is highly scalable to suit different
businesses according to resources. SaaS is highly scalable to suit small, mid, and
enterprise-level businesses.
GCP
Google Cloud Platform (GCP) provides services in the following domains:
• Compute
• Networking
• Storage and Databases
• Big Data
• Machine Learning
• Identity & Security
• Management and Developer Tools
1. Compute: GCP provides a scalable range of computing options you can tailor to match your needs.
It provides highly customizable virtual machines and the option to deploy your code directly or via
containers.
2. Networking: The Networking domain includes services related to networking.
3. Storage and Databases: The Storage and Databases domain includes services related to data
storage.
4. Big Data: The Big Data domain includes services related to big data, including the following:
• Google BigQuery
• Google Cloud Dataproc
• Google Cloud Datalab
• Google Cloud Pub/Sub
5. Cloud AI: The Cloud AI domain includes services related to machine learning.
6. Identity & Security: The Identity & Security domain includes services related to security.
7. Management Tools: The Management Tools domain includes services related to monitoring and
management, including the following:
• Stackdriver
• Monitoring
• Logging
• Error Reporting
• Trace
• Cloud Console
8. Developer Tools: The Developer Tools domain includes services related to development,
including the following:
• Cloud SDK
• Deployment Manager
• Cloud Source Repositories
• Cloud Test Lab
AZURE
Azure is Microsoft’s cloud platform, just as Google has its Google Cloud and Amazon has its Amazon
Web Services (AWS). Generally, it is a platform through which we can use Microsoft’s resources.
For example, setting up a huge server would require huge investment, effort, physical space, and so on.
In such situations, Microsoft Azure comes to our rescue. It provides us with virtual machines, fast
processing of data, analytical and monitoring tools, and so on to make
our work simpler. Azure’s pricing is also simple and cost-effective: popularly termed “Pay As
You Go”, it means you pay only for what you use.
10. Security + Identity: Includes Security Center, Azure Active Directory, Key Vault, and
Multi-Factor Authentication Services.
11. Developer Tools: Includes cloud development services like Visual Studio Team Services,
Azure DevTest Labs, HockeyApp mobile app deployment and monitoring, Xamarin cross-
platform mobile development, and more.
Difference between AWS (Amazon Web Services), Google Cloud, and Azure
• Pricing: AWS bills per hour, rounded up; Google Cloud bills per minute, rounded up; Azure bills
per minute, rounded up.
• Models: AWS offers on-demand, reserved, and spot instances; Google Cloud offers on-demand pricing
with sustained-use discounts; Azure offers per-minute on-demand pricing and commitments (pre-paid
or monthly).
GIT
Git is a distributed version control system (DVCS) that helps manage and track changes in source code
during software development. It was created by Linus Torvalds in 2005 and has become one of the most
widely used version control systems in the software development industry.
Some key concepts and features of Git:
• Version Control: Git allows developers to keep track of changes made to their code over time.
This includes modifications, additions, and deletions of files.
• Distributed System: Git is a distributed version control system, meaning that each developer
has a complete copy of the entire repository, including its full history. This allows developers to
work independently and merge their changes when necessary.
• Branching: Git enables developers to create branches, which are essentially separate lines of
development. This allows for the parallel development of features or bug fixes without affecting
the main codebase.
• Merging: Git provides tools for merging changes from one branch into another. This is essential
when multiple developers are working on different branches and need to bring their changes
together.
• History Tracking: Git maintains a complete history of changes made to the codebase. Developers
can view, revert, or analyze changes made over time.
• Remote Repositories: Git supports remote repositories, allowing developers to collaborate with
others by pushing and pulling changes to and from a shared repository. Platforms like GitHub,
GitLab, and Bitbucket provide hosting services for Git repositories.
• Staging Area: Git uses a staging area (also known as the index) to prepare and review changes
before committing them to the repository. This allows developers to selectively include or exclude
specific changes.
• Open Source: Git is an open-source tool, and its source code is freely available for
modification and distribution.
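The staging area described above can be demonstrated with a short, self-contained sketch (file names and the identity are illustrative; assumes git is available on the PATH):

```shell
# Sketch: the staging area lets you pick which changes go into a commit.
mkdir demo && cd demo
git init -q
git config user.name "Example User"      # identity for this repo only
git config user.email "user@example.com"
echo "feature code" > feature.txt
echo "debug notes"  > notes.txt
git add feature.txt                      # stage only feature.txt
git commit -q -m "Add feature"           # notes.txt stays out of the commit
git status --short                       # notes.txt remains untracked
```

Only the staged file ends up in the commit; the untracked file is left alone until it is explicitly added.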
Various GIT Components:
Git is composed of several key components that work together to enable version control and collaborative
development. Here are the main components of Git:
• Repository (Repo): A repository is a directory or storage space where your project and its
version history are stored. It contains all the files and directories associated with your project,
along with the metadata and configuration information.
• Working Directory: The working directory is the directory on your local machine where you
manipulate files and make changes to your project. It is essentially your local copy of the
repository.
• Index (Staging Area): The index, also known as the staging area, is a middle ground where
changes are prepared before being committed to the repository. It allows you to selectively stage
changes, which means you can choose which modifications to include in the next commit.
• Commit: A commit is a snapshot of the changes made to the files in the repository. It represents a
specific point in the project's history and is accompanied by a commit message that describes the
changes.
• Branch: A branch is a parallel line of development within a repository. It allows developers to
work on different features or bug fixes simultaneously without affecting the main codebase.
Branches can be merged to incorporate changes into other branches.
• Head: HEAD is a reference to the latest commit in the currently checked-out branch. It
essentially points to the tip of the branch you are currently on.
• Remote: A remote is a version of the repository stored on a different server. Git supports
collaboration by allowing developers to push and pull changes between their local repository and
remote repositories. Platforms like GitHub, GitLab, and Bitbucket are examples of remote
repositories.
• Clone: Cloning is the process of creating a copy of a remote repository on your local machine.
This allows you to start working on your own copy of a project.
• Fetch: The fetch operation retrieves changes from a remote repository but does not
automatically merge them into your working directory. It is useful for reviewing changes
before deciding to merge.
• Pull: Pull is a combination of fetch and merge. It retrieves changes from a remote repository
and automatically merges them into your working directory.
• Push: Push is the operation that sends your committed changes to a remote repository, making them accessible to others.
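As a sketch of how clone, fetch, pull, and push fit together, a local bare repository can stand in for a hosted remote such as GitHub (directory and file names below are illustrative):

```shell
# A bare repository stands in for a hosted remote.
git init -q --bare remote.git

git clone -q remote.git alice            # Alice clones the (empty) remote
cd alice
git config user.name "Alice"
git config user.email "alice@example.com"
branch=$(git symbolic-ref --short HEAD)  # default branch name (main or master)
echo "v1" > app.txt
git add app.txt
git commit -q -m "Initial version"
git push -q origin "$branch"             # publish the commit to the remote
cd ..

git clone -q remote.git bob              # Bob gets his own full copy
cd bob
git fetch -q origin                      # fetch: download without merging
git pull -q origin "$branch"             # pull: fetch + merge into working tree
cat app.txt                              # Bob now has Alice's file
```

Each clone holds the full history, so Alice and Bob can work independently and synchronize only via push, fetch, and pull.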
Git workflow
Git Flow is a structured branching model designed for projects with well-defined release cycles and a need for strict quality control.
Branches:
The branching model described is commonly known as the Gitflow Workflow. It's a branching strategy that
defines a strict branching model designed to facilitate collaboration and streamline the release process. Let's
go into detail about each branch:
1. Master Branch: The `master` branch represents the main codebase and contains production-ready code. This branch is typically stable and should only include thoroughly tested and approved changes. Each commit on the `master` branch represents a new version or release of the software.
2. Develop Branch: The `develop` branch is an integration branch where various feature branches are merged. It serves as a staging area for testing new features and ensuring they work well together before merging into the `master` branch. This branch may have ongoing development work and is not necessarily always in a production-ready state.
3. Feature Branches: Feature branches are created for developing new features or implementing changes. These branches are typically based on the `develop` branch. Once a feature is complete, the branch is merged back into the `develop` branch. Feature branches allow developers to work on specific tasks without affecting the main codebase.
4. Release Branch: The `release` branch is created when the `develop` branch reaches a point where
it is ready for a production release. This branch is used for final testing, bug fixes, and preparing
the code for deployment. No new features should be added to the release branch. Once the release
is deemed stable, it is merged into both the `master` branch and the `develop` branch.
5. Hotfix Branch: The `hotfix` branch is used to quickly address critical issues or bugs in the production code. It is created directly from the `master` branch. Hotfixes are intended to be small and focused on resolving the specific issue at hand. Once the hotfix is complete, it is merged into both the `master` branch and the `develop` branch.
• The release branch is merged into both `master` and `develop` once it's stable.
• The hotfix is merged into both `master` and `develop` to keep both branches in sync.
• This Gitflow Workflow helps maintain a structured development process, ensuring that features are developed, tested, and released in a controlled manner.
Example Scenario:
• Imagine you’re working on a large software project with a team of 10 developers, with a major release planned every six months.
• Each developer creates a feature branch off “develop” for the feature they are working on.
• Once a feature is complete, it’s merged into the “develop” branch for integration and testing.
• As the release date approaches, you create a “release/v1.0” branch to freeze code for the upcoming release.
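The scenario above can be sketched with plain git commands (the feature name, file name, and commit messages are illustrative):

```shell
# Sketch of the Gitflow branches from the scenario above.
git init -q project && cd project
git config user.name "Dev"
git config user.email "dev@example.com"
main=$(git symbolic-ref --short HEAD)      # master or main, per git config
git commit -q --allow-empty -m "Initial production release"

git checkout -q -b develop                 # long-lived integration branch
git checkout -q -b feature/login           # feature branch off develop
echo "login code" > login.txt
git add login.txt
git commit -q -m "Add login feature"

git checkout -q develop
git merge -q --no-ff -m "Merge feature/login" feature/login

git checkout -q -b release/v1.0            # freeze code for the release
# ...final testing and bug fixes would happen here...
git checkout -q "$main"
git merge -q --no-ff -m "Release v1.0" release/v1.0
git tag v1.0                               # mark the production release

git checkout -q develop
git merge -q v1.0                          # keep develop in sync with master
```

After the release merge, both the production branch and `develop` contain the released code, matching the synchronization rule described above.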
GIT INSTALLATION
• Run the following commands to configure your Git username and email, replacing Emma’s name with your own. These details will be associated with any commits that you create:
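As a sketch, the configuration commands look like this (the name and email below are placeholders; substitute your own details):

```shell
# Set the identity recorded on every commit you create.
# "Emma Paris" and the email are placeholders to replace with your own.
git config --global user.name "Emma Paris"
git config --global user.email "emma@example.com"

# Confirm the stored values.
git config --global user.name
git config --global user.email
```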
• Initialize a Repository:
Syntax: `git init` Example:
`git init`
• Clone a Repository:
Syntax: `git clone <repository_url>`
• Commit Changes:
Syntax: `git commit -m "Commit message"`
Example: `git commit -m "Add new feature"`
• Create a Branch:
Syntax: `git branch <branch_name>`
Example: `git branch feature-branch`
• Switch to a Branch:
Syntax: `git checkout <branch_name>`
Example: `git checkout feature-branch` OR
Syntax: `git switch <branch_name>` (Git version 2.23 and later)
Syntax: `git switch -c <new_branch_name>` (Git version 2.23 and later)
Example: `git switch -c new-feature`
GITHUB
GitHub is an increasingly popular programming resource used for code sharing. It's a social networking
site for programmers that many companies and organizations use to facilitate project management and
collaboration. According to statistics collected in October 2020, it is the most prominent source code host,
with over 60 million new repositories created in 2020 and boasting over 56 million total developers.
GitHub is a Git repository hosting service that provides a web-based graphical interface. It is the world’s
largest coding community. Putting a code or a project into GitHub brings it increased, widespread
exposure. Programmers can find source codes in many different languages and use the command-line
interface, Git, to make and keep track of any changes. GitHub helps every team member work together on
a project from any location while facilitating collaboration. You can also review previous versions created
at an earlier point in time.
GitHub’s Features
1. Easy Project Management: GitHub is a place where project managers and developers come
together to coordinate, track, and update their work so that projects are transparent and stay on
schedule.
2. Increased Safety With Packages: Packages can be published privately, within the team, or publicly to the open-source community. The packages can be used or reused by downloading them from GitHub.
3. Effective Team Management: GitHub helps all the team members stay on the same page and organized. Moderation tools like Issue and Pull Request Locking help the team to focus on the code.
4. Improved Code Writing: Pull requests help organizations review, develop, and propose new code. Team members can discuss any implementations and proposals through these before changing the source code.
5. Increased Code Safety: GitHub uses dedicated tools to identify and analyze vulnerabilities in the code that other tools tend to miss. Development teams everywhere work together to secure the software supply chain, from start to finish.
6. Easy Code Hosting: All the code and documentation are in one place. There are millions of repositories on GitHub, and each repository has its own tools to help you host and release code.
When it comes to hosting Git repositories, various platforms provide a robust infrastructure for collaborative
development, version control, and project management. Choosing the right hosting service depends on factors
like ease of use, collaboration features, and integration capabilities. Here are some popular Git hosting
services widely utilized in the software development community:
1. GitHub: GitHub stands out as one of the most prevalent Git hosting platforms, offering a user-
friendly interface, powerful collaboration features, and seamless integration with various tools.
It serves as an ideal choice for open-source projects, private repositories, and team
collaboration.
2. GitLab: GitLab is a comprehensive web-based Git repository manager that not only provides
source code management but also includes features like continuous integration. It caters to both
cloud-based and self-hosted solutions, giving users flexibility in hosting their repositories.
3. Bitbucket: Owned by Atlassian, Bitbucket is another popular Git repository hosting service.
Supporting both Git and Mercurial repositories, it offers features like code collaboration, issue
tracking, and continuous integration. Bitbucket is often preferred by teams using other Atlassian
tools such as Jira and Confluence.
4. GitKraken Glo Boards: GitKraken Glo Boards is an integrated task and issue tracking
service linked with GitKraken, a Git client. This platform allows teams to manage tasks
directly associated with their Git repositories and provides a visual approach to monitoring
project progress.
5. SourceForge: SourceForge, with a long history, hosts open-source software projects and offers
version control, bug tracking, and project management tools. While not as prominent as some
other options, it remains a viable choice for numerous projects.
6. AWS CodeCommit: As part of Amazon Web Services (AWS), AWS CodeCommit is a fully
managed source control service. It seamlessly integrates with other AWS services and provides
a secure and scalable environment for hosting Git repositories.
Selecting the most suitable Git hosting service depends on your team's requirements, project size, and
preferences for cloud-based or self-hosted solutions. Each platform has its strengths, catering to specific use
cases within the software development landscape.
Git and GitHub are related concepts but serve different purposes in the context of version control and
collaborative software development.
Git: Git is a distributed version control system for tracking changes in source code during software
development. It is designed for coordinating work among programmers, but it can be used to track changes
in any set of files. Its goals include speed, data integrity, and support for distributed, non-linear workflows.
GitHub: GitHub is a web-based Git repository hosting service, which offers all of the distributed revision
control and source code management (SCM) functionality of Git as well as adding its own features.
5. Git is focused on version control and code sharing, whereas GitHub is focused on centralized source code hosting.
6. Git is a version control system to manage source code history, whereas GitHub is a hosting service for Git repositories.
Course Code/Title:CS3V15/Devops Unit:II
1. Introduction of Maven:
Apache Maven is a build automation tool written in Java. It was initially released on 13 July 2004. It is developed by the Apache Software Foundation and was originally part of the Jakarta Project. It addresses two aspects of building software: how the software is built, and its dependencies. Maven was created by Jason van Zyl. It is built using a plugin-based architecture that allows it to make use of any application controllable through standard input. It dynamically downloads Java libraries.
Maven simplifies the above-mentioned problems. It mainly performs the following tasks:
1. It makes a project easy to build.
2. It provides a uniform build process (a Maven project can be shared by all Maven projects).
3. It provides project information (log documents, cross-referenced sources, mailing lists, dependency lists, unit test reports, etc.).
4. It makes it easy to migrate to new features of Maven.
2. Installation of Maven:
To install maven on windows, you need to perform following steps:
1. Download maven and extract it
2. Add JAVA_HOME and MAVEN_HOME in environment variable
3. Add maven path in environment variable
4. Verify Maven
1) Download Maven and extract it.
2) Add MAVEN_HOME: Now add MAVEN_HOME as the variable name and the path of Maven as the variable value. It must be the home directory of Maven, i.e. the outer directory of bin. For example: E:\apache-maven-3.1.1
3) Add the Maven path: Click on New if the path is not set; if it is set, edit the path and append the path of [Link]. Here, we have installed JDK and its path is set by default, so we are going to append the path of Maven.
4) Verify Maven
To verify whether Maven is installed or not, open the command prompt and write:
mvn -version
Now it will display the version of Maven and the JDK, including the Maven home and Java home.
3. POM FILES:
POM is an acronym for Project Object Model. The [Link] file contains information about the project and configuration information for Maven to build the project, such as dependencies, build directory, source directory, test source directory, plugins, goals, etc. Maven reads the [Link] file, then executes the goal.
Before Maven 2, it was named the [Link] file. But since Maven 2 (and also in Maven 3), it is renamed [Link].
For creating a simple [Link] file, you need to have the following elements:
Element Description
modelVersion It is the sub element of project. It specifies the modelVersion. It should be set to
4.0.0.
groupId It is the sub element of project. It specifies the id for the project group.
artifactId It is the sub element of project. It specifies the id for the artifact (project). An
artifact is something that is either produced or used by a project. Examples of
artifacts produced by Maven for a project include: JARs, source and binary
distributions, and WARs.
version It is the sub element of project. It specifies the version of the artifact under
given group.
Maven [Link] file with additional elements
Here, we are going to add other elements in [Link] file such as:
Element Description
4. Maven Repositories:
A Maven repository is a directory of packaged JAR files with [Link] files. Maven searches for dependencies in the repositories. There are three types of Maven repository:
1. Local Repository
2. Central Repository
3. Remote Repository
If a dependency is not found in these repositories, Maven stops processing and throws an error.
The Maven local repository is located on your local system. It is created by Maven when you run any Maven command.
The Maven central repository is located on the web. It has been created by the Apache Maven community itself.
The central repository contains a lot of common libraries that can be viewed at this URL: [Link]
The Maven remote repository is also located on the web. Some libraries can be missing from the central repository, such as the JBoss library, so we need to define a remote repository in the [Link] file.
Let's see the code to add the JUnit library to the [Link] file.
[Link]
<project xmlns="[Link]
xmlns:xsi="[Link]
xsi:schemaLocation="[Link]
[Link]
<modelVersion>4.0.0</modelVersion>
<groupId>[Link].application1</groupId>
<artifactId>my-application1</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>Maven Quick Start Archetype</name>
<url>[Link]
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.8.2</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
5. Maven Plugins:
Maven plugins are a central part of the Maven framework; they are used to perform specific goals. There are two types of plugins:
1. Build Plugins
2. Reporting Plugins
Build Plugins
These plugins are executed at the time of build. These plugins should be declared inside
the <build> element.
Reporting Plugins
These plugins are executed at the time of site generation. These plugins should be
declared inside the <reporting> element.
Plugin Description
resources: copies the resources to the output directory for inclusion in the JAR.
verifier: verifies the existence of certain conditions. It is useful for integration tests.
The compiler plugin is used to compile the source code of a Maven project. This
plugin has two goals, which are already bound to specific phases of the default lifecycle:
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.12.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
6. Maven Build Lifecycle:
A Build Lifecycle is a well-defined sequence of phases, which define the order in which the goals are to be executed. Here a phase represents a stage in the life cycle. As an example, a typical Maven Build Lifecycle consists of the following sequence of phases.
• Validate: This step validates if the project structure is correct. For example – It
checks if all the dependencies have been downloaded and are available in the local
repository.
• Compile: It compiles the source code, converts the .java files to .class, and stores
the classes in the target/classes folder.
• Package: This step packages the compiled code in a distributable format like JAR
or WAR.
• Install: This step installs the packaged code to the local Maven repository.
• Deploy: It copies the packaged code to the remote repository for sharing it with
other developers.
There are always pre and post phases to register goals, which must run prior to, or after, a particular phase.
When Maven starts building a project, it steps through a defined sequence of phases and executes goals, which are registered with each phase. Maven has three built-in build lifecycles:
• default: This is the main lifecycle, as it’s responsible for project deployment.
• clean: Handles project cleaning, ensuring that all artifacts generated by previous builds are removed.
• site: Handles the creation of the project’s site documentation.
The default lifecycle consists of the following phases:
• Validate: This step validates if the project structure is correct. For example – It
checks if all the dependencies have been downloaded and are available in the local
repository.
• Compile: It compiles the source code, converts the .java files to .class, and stores
the classes in the target/classes folder.
• Test: It runs unit tests for the project.
• Package: This step packages the compiled code in a distributable format like JAR
or WAR.
• Integration test: It runs the integration tests for the project.
• Verify: This step runs checks to verify that the project is valid and meets the
quality standards.
• Install: This step installs the packaged code to the local Maven repository.
• Deploy: It copies the packaged code to the remote repository for sharing it with
other developers.
Clean Lifecycle (clean)
The first lifecycle in Maven. Running mvn clean invokes the clean lifecycle, which consists of three lifecycle phases:
1. pre-clean
2. clean
3. post-clean
Site Lifecycle (site)
Maven does more than build software artifacts from a project; it can also generate project documentation and reports about the project, or a collection of projects. Project documentation and site generation have a dedicated lifecycle which contains four phases:
1. pre-site : execute processes needed prior to the actual project site generation
2. site : generate the project's site documentation
3. post-site : execute processes needed to finalize the site generation, and to
prepare for site deployment
4. site-deploy: deploy the generated site documentation to the specified web server
7. Maven Profiles:
A Build profile is a set of configuration values, which can be used to set or override
default values of Maven build. Using a build profile, you can customize build for different
environments such as Production v/s Development environments.
Profile Activation
1. [Link]: default configuration used if no profile is mentioned.
2. [Link]: test configuration when the test profile is used.
3. [Link]: production configuration when the prod profile is used.
Explicit Profile Activation
In the following example, we will attach the maven-antrun-plugin:run goal to the test phase. This will allow us to echo text messages for different profiles. We will use [Link] to define different profiles and will activate a profile at the command console using a Maven command.
xmlns:xsi = "[Link]
xsi:schemaLocation = "[Link]
[Link]
<modelVersion>4.0.0</modelVersion>
<groupId>[Link]</groupId>
<artifactId>project</artifactId>
<version>1.0</version>
<profiles>
<profile>
<id>test</id>
<build>
<plugins>
<plugin>
<groupId>[Link]</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>1.1</version> <executions>
<execution>
<phase>test</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<tasks>
<echo>Using [Link]</echo>
<copy file="src/main/resources/[Link]"
tofile="${[Link]}/[Link]"/>
</tasks>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
</project>
Now open the command console, go to the folder containing [Link] and execute the following mvn command. Pass the profile name as an argument using the -P option.
C:\MVN\project>mvn test -Ptest
Maven will start processing and display the result of the test build profile.
xmlns:xsi = "[Link]
xsi:schemaLocation="[Link]
[Link]
<mirrors>
<mirror>
<id>[Link]</id>
<url>[Link]
<mirrorOf>*</mirrorOf>
</mirror>
</mirrors>
<activeProfiles>
<activeProfile>test</activeProfile>
</activeProfiles>
</settings>
Now open command console, go to the folder containing [Link] and execute the
following mvn command. Do not pass the profile name using -P option. Maven will
display result of test profile being an active profile.
C:\MVN\project>mvn test
Now remove the active profile from Maven [Link] and update the test profile mentioned in [Link]. Add the activation element to the profile element as shown below. The test profile will trigger when the system property "env" is specified with the value "test". Create an environment variable "env" and set its value to "test".
<profile>
<id>test</id>
<activation>
<property>
<name>env</name>
<value>test</value>
</property>
</activation>
</profile>
Let's open command console, go to the folder containing [Link] and execute the
following mvn command.
C:\MVN\project>mvn test
Update the activation element to include OS details as shown below. This test profile will trigger when the operating system is Windows XP.
<profile>
<id>test</id>
<activation>
<os>
<name>Windows XP</name>
<family>Windows</family>
<arch>x86</arch>
<version>5.1.2600</version>
</os>
</activation>
</profile>
Now open command console, go to the folder containing [Link] and execute the
following mvn commands. Do not pass the profile name using -P option. Maven will
display result of test profile being an active profile.
C:\MVN\project>mvn test
The activation element can also test for a file. The following test profile will trigger when the file named in the <missing> element does not exist.
<profile>
<id>test</id>
<activation>
<file>
<missing>target/generated-sources/axistools/wsdl2java/
com/companyname/group</missing>
</file>
</activation>
</profile>
Now open the command console, go to the folder containing [Link] and execute the following mvn commands. Do not pass the profile name using the -P option. Maven will display the result of the test profile being an active profile.
C:\MVN\project>mvn test
8. Creating a Maven Project in Eclipse:
The first step is to open Eclipse, which comes with an integrated Maven environment. The Eclipse window opens on the screen. Complete the following steps:
If you want to create a Java project, you can select the “Java Project” option. Since we
are not creating a Java project specifically, we have chosen the “Project” option.
The dialog box that appears on the screen will display different types of projects.
• Click on “Next”
Several Group IDs, Artifact IDs, and Versions will then appear.
In the next dialog box that appears, you’ll complete the following steps:
• Enter the Group Id: “[Link]”
• Enter the Artifact Id: “mavenproject”
• Click on “Finish”
You can see all the basic information that you have entered on the screen, such as the
Artifact ID, Group ID, etc.
This process takes place by default in Eclipse. There will also be some default test cases.
Adding more test cases is up to the user, but these test cases and commands can easily be added in the workspace.
If we try to remove certain dependencies from our file, we will receive error messages.
To troubleshoot this, complete the following steps:
Return to the previous screen and undo the deletion. The errors that occurred will
disappear.
9. Dependency Management:
<dependencyManagement>
<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.14.0</version>
</dependency>
</dependencies>
</dependencyManagement>
The above code just declares the new artifact commons-lang3, but it doesn’t really add
it to the project dependency resource list.
Dependencies:
This tag contains a list of dependency tags. Each dependency is supposed to have at
least two main tags, which are groupId and artifactId.
For example:
<dependencies>
<dependency>
<groupId>[Link]</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.14.0</version>
</dependency>
</dependencies>
The version and scope tags can be inherited implicitly if we have used the dependencyManagement tag earlier in the POM file. dependencyManagement is just a declaration and does not really add a dependency; the dependencies tag adds the actual dependency to the project.
<dependencyManagement>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
</dependencies>
</dependencyManagement>
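For instance, if the junit entry above sits in a parent POM's dependencyManagement section, a child POM (hypothetical) can pull in the dependency without repeating the version or scope:

```xml
<!-- Child POM: version 4.13.2 and scope test are inherited
     from the parent's dependencyManagement section. -->
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
  </dependency>
</dependencies>
```

This keeps version numbers in one place, so all child modules stay on the same dependency versions.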
The declarative language lies on top of a general-purpose task graph, which can be fully supported in the build.
Gradle allows you to apply common design principles to your build. It gives you a perfect structure for the build, so that you can design a well-structured, easily maintained, and comprehensible build.
Deep API
By using this API, you can monitor and customize its configuration and execution behavior to
the core.
Gradle scales
Gradle can easily increase productivity, from simple single-project builds to huge enterprise multi-project builds.
Multi-project builds
Gradle supports multi-project builds and partial builds. If you build a subproject, Gradle takes care of building all the subprojects that the subproject depends on.
Gradle provides full support for your Ant tasks, and for Maven and Ivy repository infrastructure for publishing and retrieving dependencies. It also provides a converter for turning a Maven [Link] into a Gradle script.
Ease of migration
Gradle can easily adapt to any structure. Therefore, you can always develop your Gradle build in the same branch, alongside your live build script.
Gradle Wrapper
The Gradle Wrapper allows you to execute Gradle builds on machines where Gradle is not installed. This is useful for continuous integration servers.
Groovy
Gradle's build scripts are written in the Groovy programming language. The whole design of Gradle is oriented towards being used as a language, not as a rigid framework. Groovy allows you to write your own script with some abstractions. The whole Gradle API is fully designed in the Groovy language.
Installation of Gradle:
Gradle requires JDK version 6 or later to be installed on the system. It uses the JDK libraries which are installed and set in the JAVA_HOME environment variable. Gradle carries its own Groovy library; therefore, we need not install Groovy explicitly. If Groovy is installed, it is ignored by Gradle.
The steps to install Gradle on your system are explained below.
Step 1: Verify the Java installation
First of all, you need to have the Java Software Development Kit (SDK) installed on your system. To verify this, execute the java -version command on whichever platform you are working on.
In Windows
Execute the following command to verify the Java installation. I have installed JDK 1.8 on my system.
C:\> java -version
Output
In Windows
Extract the downloaded zip file named [Link] and copy the distribution files from
Downloads\gradle-2.11\ to C:\gradle\ location.
After that, add the C:\gradle and C:\gradle\bin directories to the GRADLE_HOME and PATH
system variables.
Follow the given instructions − Right Click On My Computers -> Click On Properties ->
Advanced System Settings -> Click On Environmental Variables.
There you will find a dialog box for creating and editing system variables.
Click on new button for creating GRADLE_HOME variable (follow the left side screenshot).
Click on Edit for editing the existing Path system variable (follow the right side screenshot).
In Windows
C:\> gradle -v
Output
Here you will find the Gradle version.
Gradle 2.11
Revision: 584db1c7c90bdd1de1d1c4c51271c665bfcba978
Groovy: 2.4.4
The Gradle build is the process of building a Gradle project. When we run a gradle
command, it looks for a file called [Link] in the current directory. This file is
also called the Gradle build script. The build configuration, tasks, and plugins are
described in this file. The build script describes a project and its tasks.
Let's create a small Gradle project, run some of the basic Gradle commands, and
understand how Gradle manages the project.
First, open the command line, create a directory for the project, and change directory
into it.
To generate a Gradle project, run the gradle init command. It will generate a simple
project. With this project, we will explore and understand everything that is generated.
When we run the gradle init command, it will ask for some basic requirements. First, it
will ask the type of project that we want to create. It will give four options:
1: basic
2: application
3: library
4: Gradle plugin
Since we are just making a demo project, we will select the basic option. To select the
basic option, press 1 and then the Enter key. Consider the below output:
Next, it will ask for DSL. There are two options that are available for DSL:
1: Groovy
2: Kotlin
Groovy is the default DSL provided by Gradle. Select the build script DSL.
Next, it will ask for the project name. Type the project name and press the Enter key. It
will take a while to build the project. After the successful execution, we will get the
message BUILD SUCCESSFUL.
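The interaction above can be sketched as a terminal session; the directory and project names here are illustrative:

```shell
mkdir demo && cd demo    # create and enter the project directory
gradle init              # start the interactive project generator
# Select type of project to generate: 1 (basic)
# Select build script DSL:            1 (Groovy)
# Project name (default: demo):       demo
# ...
# BUILD SUCCESSFUL
```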
Now we have successfully created a Gradle project. Now, what will happen to our
specified directory? Consider the below structure of the Gradle project.
It is the default structure of a Gradle project. Gradle will generate the following things for
us:
1. build.gradle: the build script for configuring the current project.
2. gradle/wrapper/gradle-wrapper.jar: an executable JAR file used by the Gradle wrapper.
3. gradle/wrapper/gradle-wrapper.properties: configuration properties for the Gradle wrapper.
4. gradlew: the Gradle wrapper script for UNIX-based operating systems.
5. gradlew.bat: the Gradle wrapper script for Windows.
6. settings.gradle: the settings script for configuring the Gradle build.
Gradle supports APIs for creating and managing tasks through a Groovy-based DSL or
Kotlin-based DSL. Every project contains a collection of tasks for some basic operation.
Gradle supports a library of tasks that configure the project. For example, there is a Copy
task, which copies files from one location to another. The Copy task is one of the most
used tasks in Gradle.
To use the Copy task in the build script, follow the process below.
Step 2: Add a file called [Link] in the src directory. Add the single line "Hello,
World!" to it; alternatively, we can leave it empty.
Define a task called copy (of type Copy) in the [Link] file. It will copy the src
directory to a new directory called dest. We don't have to create the dest directory; the
Copy task will do it for us.

task copy(type: Copy, group: "Custom", description: "The sources are copied to dest directory") {
    from "src"
    into "dest"
}
We can provide any values for the group and description, or we can omit them; if omitted,
they will also be missing from the task report used later.
After the successful execution of the task, we will get the BUILD SUCCESSFUL message.
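Assuming the src directory and the task definition above exist, the task can be run from the project root as follows:

```shell
gradle -q copy     # runs the copy task; creates dest/ containing the contents of src/
gradle -q tasks    # the task now appears under its "Custom" group in the task report
```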
The [Link] file is the build script of a Gradle project. All the tasks and plugins are
defined in this file.
When we run a gradle command, it looks for a file called [Link] in the current
directory. Although we have called it a build script, strictly speaking, it is a build
configuration script. The build script defines a project and its tasks.
The [Link] file contains three default sections. They are as follows:
o plugins: In this section, we can apply plugins such as the java-library plugin to add
support for a Java library.
o repositories: In this section, we can declare internal and external repositories for
resolving dependencies. We can declare the different types of repositories
supported by Gradle, such as Maven, Ant, and Ivy.
o dependencies: In this section, we can declare the dependencies that are necessary for
a particular project.
Additionally, we can declare other project-related items, such as tasks, in this file.
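A minimal build script illustrating the three sections might look like the following sketch; the dependency coordinates are illustrative examples, not taken from these notes:

```groovy
plugins {
    id 'java-library'    // adds support for building a Java library
}

repositories {
    mavenCentral()       // resolve dependencies from the Maven Central repository
}

dependencies {
    // illustrative coordinates; replace with your project's real dependencies
    implementation 'org.apache.commons:commons-lang3:3.12.0'
    testImplementation 'junit:junit:4.13.2'
}
```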
Following are some basic commands to display the information of the project:
Listing projects
In Gradle, all the sub-projects of a project in the workspace can be listed in a hierarchy.
To do so, run the below command from the root directory of the project.
gradle -q projects
Listing Tasks
Gradle allows us to list all the essential tasks of the project. To list the task, run the
below command:
gradle -q tasks
Output:
C:\Users\HiMaNshU\eclipse-workspace>gradle -q tasks
Help tasks (output excerpt for project 'eclipse-workspace'):
To see all tasks and more detail, run gradle tasks --all
To see more detail about a task, run gradle help --task <task>
To list all the tasks of the project, run:
gradle tasks --all
Listing Dependencies
In Gradle, we can list the dependencies which are broken down by the configuration. To
list the dependencies, run the below command:
gradle -q dependencies
Difference between Gradle and Maven
Course Code/Title:CS3V15/Devops Unit: III
Install & Configure Jenkins, Jenkins Architecture Overview, creating a Jenkins Job, configuring
a Jenkins job, Introduction to Plugins, Adding Plugins to Jenkins, commonly used plugins (Git
Plugin, Parameter Plugin, HTML Publisher, Copy Artifact and Extended choice parameters).
Configuring Jenkins to work with java, Git and Maven, creating a Jenkins Build and Jenkins
workspace.
Overview
Jenkins is a widely-used open-source automation server that helps automate the parts of software
development related to building, testing, and deploying, facilitating continuous integration and
continuous delivery (CI/CD). Below are detailed steps and considerations for installing and configuring
Jenkins.
Installation
1. Download:
○ Download the latest version of Jenkins from the official Jenkins website.
2. Installation Steps:
○ Windows:
■ Run the .msi installer and follow the installation wizard.
■ After installation, Jenkins will start automatically as a Windows service.
○ Linux:
■ Add the Jenkins repository and import the GPG key.
■ Use the package manager to install Jenkins (e.g., apt-get install jenkins).
3. Customize Jenkins:
○ On first start, unlock Jenkins with the initial admin password and install the suggested plugins.
Configuration
1. Global Security:
○ Configure security settings under Manage Jenkins > Configure Global Security.
○ Use the built-in user database or integrate with external authentication mechanisms like
LDAP.
2. Pipeline Configuration:
○ Define your pipeline stages in a Jenkinsfile placed in the root of your project repository.
3. Plugins:
○ Enhance Jenkins functionality by installing plugins from the Manage Jenkins >
Manage Plugins page.
Jenkins is a robust automation server widely used for continuous integration and continuous delivery
(CI/CD) processes. Here's a detailed breakdown of the Jenkins architecture and its core components.
1. Jenkins Master:
○ Role: The master is the central coordination server; it schedules builds, dispatches
jobs to agents, and aggregates results.
2. Jenkins Agents:
○ Role: Agents are responsible for executing the build jobs assigned by the master.
○ Types: Agents can run on different operating systems and hardware configurations.
○ Configuration:
■ Static Agents: Permanently assigned to the Jenkins master.
■ Dynamic Agents: Created and destroyed as needed, often used in cloud
environments.
○ Communication: Agents communicate with the master using the Jenkins Remoting
protocol.
3. Job Configuration:
○ Types of Jobs:
■ Freestyle Projects: Basic job type with simple configurations.
■ Pipeline Jobs: Scripted or declarative pipelines that define the entire build
process as code.
■ Multi-Configuration (Matrix) Projects: Allows testing across different
environments and configurations.
○ Components:
■ Source Code Management: Integration with version control systems like Git,
SVN, etc.
■ Build Triggers: Conditions that start the job, such as code commits, scheduled
times, or manual triggers.
■ Build Steps: Actions performed during the build, such as compiling code,
running tests, and packaging artifacts.
■ Post-Build Actions: Steps executed after the build, like deploying artifacts,
sending notifications, or archiving results.
4. Distributed Builds:
○ Concept: The Jenkins master distributes build tasks to multiple agents, enabling
distributed builds.
○ Benefits:
■ Scalability: Handle more builds simultaneously.
■ Resource Management: Assign builds to agents with the necessary resources.
■ Isolation: Run builds in isolated environments to avoid conflicts.
1. Job Trigger:
○ A build is started by a trigger such as a code commit, a schedule, or a manual request.
2. Job Execution:
○ Job Assignment: The master assigns the job to an available agent based on labels,
availability, and resource requirements.
○ Build Environment Setup: The agent sets up the environment, including checking out
the code, installing dependencies, and configuring the workspace.
○ Execution: The agent runs the build steps as defined in the job configuration.
3. Build Results:
○ Logs: Captures console output and logs from the build process.
○ Artifacts: Stores build artifacts like binaries, packages, and reports.
○ Test Results: Collects and displays test results and code coverage reports.
○ Build History: Maintains a history of all builds, including status, duration, and changes.
4. Notification and Reporting:
○ Notifications: Jenkins can send notifications through various channels (email, chat, etc.)
upon build completion or failure.
○ Dashboards: Provides dashboards for visualizing build status, trends, and metrics.
○ Reports: Generates and displays reports on test results, code quality, and other metrics.
Summary
Jenkins' architecture is designed to be flexible and scalable. The master-agent model allows for
efficient distribution of build tasks, while plugins provide extensive customization options. By leveraging
Jenkins' comprehensive job configuration and robust notification and reporting capabilities, teams can
streamline their CI/CD pipelines and improve software quality and delivery speed.
Creating a Jenkins job involves setting up a new project within Jenkins to automate various tasks such
as building code, running tests, and deploying applications. Here is a step-by-step guide to creating a
Jenkins job.
Prerequisites
Step-by-Step Guide
1. Access Jenkins:
○ Open your web browser and navigate to your Jenkins instance URL (e.g., [Link]).
2. Create a New Job:
○ On the Jenkins dashboard, click on the "New Item" link on the left-hand side menu.
3. Enter Job Name:
○ Enter a name for your new job in the "Enter an item name" field.
○ Choose the type of job you want to create. For most uses, "Freestyle project" is a good
starting point.
○ Click "OK".
4. Configure the Job:
○ After clicking "OK", you will be directed to the job configuration page.
5. Source Code Management:
○ Select Version Control System: Choose the version control system (e.g., Git,
Subversion).
○ Repository URL: Enter the repository URL.
7. Build Triggers:
○ Choose when the job should run (e.g., after a code commit, on a schedule, or manually).
8. Build Environment:
○ Configure the build environment settings like setting environment variables, running
scripts before the build, etc.
9. Build Steps:
○ Click on "Add Build Step" and choose the appropriate build step (e.g., "Execute Shell",
"Invoke Ant", "Invoke Gradle script").
○ Example: For executing a shell script, add the necessary shell commands.
10. Post-build Actions:
○ Click on "Add post-build action" and choose the appropriate action (e.g., "Archive the
artifacts", "Publish JUnit test result report").
○ Configure the settings for each post-build action.
11. Save the Configuration:
○ Once you have configured all the necessary settings, click "Save" at the bottom of the
page.
12. Run the Job:
○ On the job's main page, click "Build Now" to run the job immediately.
13. Monitor the Job:
○ Click on the build number in the "Build History" to view the detailed output and logs of
the build process.
○ Check the console output for any errors or warnings.
Practical Example
○ Name: MyMavenProject
○ Git:
3. Build Triggers:
○ Click "Save".
○ Click "Build Now" on the job's main page.
By following these steps, you can set up a Jenkins job to automate your build process for various types
of projects. Jenkins provides extensive customization options, allowing you to tailor the job to meet
your specific requirements.
1. Install Jenkins
First, make sure Jenkins is installed. You can download it from the official Jenkins website.
Open your web browser and go to [Link] to access the Jenkins dashboard.
○ Enter a name for your job in the "Enter an item name" field.
3. Select Job Type:
General Settings
1. Project Description:
1. Select SCM:
Build Triggers
Build Environment
Build Steps
Post-build Actions
○ Click "Save".
2. Build the Job:
1. Pipeline as Code:
○ For complex workflows, use Jenkins Pipelines. Create a Jenkinsfile in your repository.
2. Plugins:
○ Extend Jenkins functionality with plugins from the Jenkins Plugin Manager.
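As a sketch of pipeline-as-code, a minimal declarative Jenkinsfile could look like the following; the stage names and Maven commands are illustrative assumptions, not a prescribed configuration:

```groovy
// Jenkinsfile (declarative pipeline) checked into the project root
pipeline {
    agent any                              // run on any available agent
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'     // compile and package the project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'              // run the test suite
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'   // publish JUnit results
        }
    }
}
```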
By following these steps, you can set up a Jenkins job to automate builds, integrate with your source
code management system, and perform various actions based on build results.
Jenkins plugins are extensions that add extra functionality to Jenkins. They allow you to customize and
extend Jenkins to better fit your specific needs. Plugins can provide integrations with other tools, add
new features, and improve existing functionalities.
1. Go to Jenkins Dashboard:
2. Installing Plugins
1. Available Plugins:
○ Click "Install without restart" or "Download now and install after restart".
3. Updating Plugins
1. Go to Updates:
2. Select Plugins:
1. Installed Plugins:
○ Click the "Uninstall" button next to the plugin you want to remove.
3. Check Plugin Versions:
- Customizing Jobs: With plugins, you can add specific steps, triggers, and post-build actions to your
jobs, making Jenkins adaptable to any workflow.
- Automation: Plugins help automate processes like code quality checks, notifications, and
deployments.
1. Navigate to Jenkins Dashboard: Open your Jenkins dashboard in a web browser (e.g.,
[Link]).
2. Go to Plugin Manager: From the dashboard, click on 'Manage Jenkins' on the left-hand side menu.
On the Manage Jenkins page, click on 'Manage Plugins'. This will take you to the Plugin Manager,
where you can view, install, and update plugins.
2. Search for Plugins: Use the search box to quickly find the plugin you need. For example, if you're
looking to integrate Jenkins with Git, type 'Git' in the search box.
3. Popular Plugins: Jenkins highlights popular plugins at the top of the Available tab. These are
commonly used plugins that most users find essential.
4. Installing Plugins
1. Selecting Plugins to Install: Check the box next to each plugin you want to install. You can select
multiple plugins at once.
2. Install Without Restart: After selecting your plugins, scroll down and click 'Install without restart'.
Jenkins will install the plugins immediately, and you can continue using Jenkins during the installation.
3. Install After Restart: If you prefer, you can choose to 'Download now and install after restart'. This
will install the plugins after Jenkins is restarted, ensuring that all changes are applied without
interference.
4. Installation Progress: The installation process will show a progress bar for each plugin. Once a
plugin is installed, it will move to the 'Installed' tab.
2. Global Tool Configuration: For build tools like Maven or JDKs installed via plugins, you can
configure them under 'Global Tool Configuration'. Here, you define the paths to the tools or specify
installation options.
1. Updating Plugins: Regularly update your plugins to ensure compatibility with the latest version of
Jenkins and to benefit from new features or security patches. Go to the 'Updates' tab in the Plugin
Manager. Jenkins will show you all plugins with available updates. Click 'Update' to install the latest
versions.
2. Uninstalling Plugins: If a plugin is no longer needed, you can uninstall it from the 'Installed' tab.
Select the plugin and click 'Uninstall'. Uninstallation typically requires a Jenkins restart to complete.
1. Git Plugin: Integrates Jenkins with Git, allowing you to pull code from GitHub, Bitbucket, or other Git
repositories.
2. Pipeline Plugin: Enables the creation and management of Jenkins pipelines, a powerful way to
define complex build, test, and deployment processes.
3. Maven Integration Plugin: Adds support for Maven projects, allowing you to build and manage
Maven-based projects directly within Jenkins.
4. Email Extension Plugin: Provides advanced email notification options for build statuses, including
customizable email templates.
5. Blue Ocean: An alternative user interface for Jenkins that simplifies pipeline creation and offers a
modern, user-friendly design.
8. Troubleshooting Plugin Issues
1. Compatibility Issues: Sometimes, plugins might conflict with each other or with the Jenkins version.
In such cases, refer to the plugin's documentation or Jenkins logs to diagnose the issue.
2. Rollback: If a plugin update causes problems, you can rollback to a previous version from the
'Installed' tab by selecting the desired version.
3. Restart Jenkins: Some plugin changes require a Jenkins restart. Ensure all jobs are completed
before restarting to avoid any interruptions.
Overview: The Git Plugin integrates Jenkins with Git repositories, allowing Jenkins to clone, pull, and
manage Git-based source code repositories.
Key Features:
● Source Code Management: Configures Jenkins to use Git repositories as the source code for
jobs.
● Polling: Supports polling the Git repository for changes to trigger builds automatically.
● Branch and Tag Support: Allows you to specify branches or tags to build from.
● Credentials: Manages authentication for private repositories using various methods (e.g.,
SSH keys, username/password).
Configuration:
Usage:
● Typically used in continuous integration pipelines to fetch code from Git repositories
before running build steps.
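In a pipeline job, the Git Plugin's git step performs the checkout; the repository URL, branch, and credentials ID below are illustrative placeholders:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // clone the repository before any build steps run
                git url: 'https://github.com/example/myapp.git',
                    branch: 'main',
                    credentialsId: 'github-creds'   // needed only for private repositories
            }
        }
    }
}
```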
2. Parameter Plugin
Overview: The Parameter Plugin allows Jenkins jobs to accept parameters at build time, enabling
dynamic and flexible builds.
Key Features:
● Parameter Types: Supports various parameter types such as string, choice, boolean, and
more.
● Default Values: Provides default values for parameters.
● Prompt for Parameters: Users are prompted to enter values for parameters when starting a
build.
Configuration:
Usage:
● Useful for creating jobs that require user input or need to be customized for different build
scenarios.
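In a declarative pipeline, build parameters can also be declared directly in the Jenkinsfile; the parameter names and values below are illustrative:

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'VERSION', defaultValue: '1.0.0', description: 'Version to build')
        choice(name: 'TARGET_ENV', choices: ['dev', 'staging', 'prod'], description: 'Deployment target')
        booleanParam(name: 'RUN_TESTS', defaultValue: true, description: 'Run the test suite')
    }
    stages {
        stage('Info') {
            steps {
                // parameter values are available through the params object
                echo "Building ${params.VERSION} for ${params.TARGET_ENV}"
            }
        }
    }
}
```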
3. HTML Publisher
Overview: The HTML Publisher Plugin allows Jenkins to publish HTML reports and artifacts generated
during the build process.
Key Features:
Configuration:
Usage:
● Ideal for displaying build reports such as test results or code coverage metrics in a user-friendly
HTML format.
DEMO:
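A typical pipeline usage of the plugin looks like the following sketch; the report directory and report name are illustrative assumptions:

```groovy
post {
    always {
        // publish an HTML report generated earlier in the build
        publishHTML(target: [
            reportDir:   'target/site',     // directory containing the report
            reportFiles: 'index.html',      // entry page of the report
            reportName:  'Build Report',    // name shown in the Jenkins UI
            keepAll:      true,             // keep reports for every build
            allowMissing: false             // fail if the report is absent
        ])
    }
}
```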
4. Copy Artifact
Overview: The Copy Artifact Plugin allows Jenkins jobs to copy build artifacts from other jobs,
facilitating artifact reuse across different jobs.
Key Features:
● Artifact Copying: Copies files from one build to another, which can be from a specific build or
the latest successful build.
● Triggering Builds: Can be used in downstream jobs to fetch artifacts from upstream jobs.
Configuration:
Usage:
● Commonly used in multi-job pipelines where artifacts need to be shared between different jobs
or stages.
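In a downstream pipeline job, the plugin's copyArtifacts step can fetch files produced by an upstream job; the project name and filter below are illustrative:

```groovy
stage('Fetch artifacts') {
    steps {
        // copy JARs archived by the latest successful run of the upstream job
        copyArtifacts projectName: 'upstream-build',
                      selector: lastSuccessful(),
                      filter: 'target/*.jar'
    }
}
```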
Overview: The Extended Choice Parameter Plugin provides advanced parameter types for Jenkins
jobs, such as multi-select lists, checkboxes, and more complex formats.
Key Features:
● Parameter Types: Includes multi-select, checkboxes, and other advanced input options.
● Dynamic Choices: Allows dynamic generation of choices from scripts or external sources.
Configuration:
Usage:
● Useful for scenarios where complex user input is needed or where multiple selection options
are required.
Overview: Jenkins requires Java to run. Configuring Java in Jenkins involves specifying the Java
Development Kit (JDK) installations Jenkins should use.
Steps:
1. Add a JDK:
○ Go to Manage Jenkins > Global Tool Configuration and add a JDK entry.
2. Configure the Installation:
○ Check Install automatically to let Jenkins download and install the JDK, or specify
the path to an existing JDK installation.
○ If specifying the path manually, provide the JAVA_HOME directory.
3. Example Configuration:
○ Name: JDK 11
○ JAVA_HOME: /usr/lib/jvm/java-11-openjdk
4. Verify JDK Configuration:
You can verify the JDK configuration by creating a simple Jenkins job and adding a build step that
prints the Java version using:
java -version
Overview: The Git Plugin integrates Jenkins with Git repositories, enabling Jenkins to clone, pull, and
manage Git-based source code repositories.
Steps:
1. Install the Git Plugin:
○ Go to Manage Jenkins > Manage Plugins; under the Available tab, search for Git Plugin and install it.
2. Configure Git in Jenkins:
○ Navigate to Manage Jenkins > Global Tool Configuration.
○ Select Git.
Overview: The Maven Plugin integrates Jenkins with Apache Maven, allowing Jenkins to use Maven
to build projects.
Steps:
1. Install Maven:
○ Ensure that Apache Maven is installed on your system. You can download it from the
Apache Maven website.
2. Configure Maven in Jenkins:
○ Go to Manage Jenkins > Global Tool Configuration.
○ Check Install automatically to let Jenkins download Maven, or specify the path to an
existing Maven installation.
3. Example Configuration:
○ Name: Maven 3.8.6
○ MAVEN_HOME: /usr/share/maven
4. Configure Maven in a Jenkins Job:
○ Create or edit a Jenkins job and go to the Build section.
By following these steps, Jenkins will be properly configured to work with Java, Git, and Maven,
allowing you to set up and manage continuous integration and delivery pipelines effectively. If you
need further customization or run into issues, consulting the documentation for each tool or plugin can
provide additional guidance.
Overview: A Jenkins build is a process where Jenkins executes a series of steps defined in a job
configuration. This typically involves compiling code, running tests, and generating artifacts.
○ Click OK to proceed.
○ Branch: main
● Build Step: Execute shell
○ Command: mvn clean install
Overview: The Jenkins workspace is a directory where Jenkins stores files and artifacts related to a
particular build. Each job has its own workspace, which is used to perform build operations.
1. Workspace Location:
○ By default, the workspace is located in the Jenkins home directory, typically at
/var/lib/jenkins/workspace/ on Linux systems or C:\Program Files
(x86)\Jenkins\workspace\ on Windows.
2. Workspace Structure:
○ Each job gets its own subdirectory within the workspace. For example, a job named
MyApp-Build will have its workspace at /var/lib/jenkins/workspace/MyApp-Build/.
The workspace contains:
■ Source Code: The code pulled from the repository.
■ Build Artifacts: Files generated during the build process.
■ Logs: Logs related to the build process.
3. Workspace Usage:
○ Building: During the build process, Jenkins checks out the code into the workspace,
executes build steps, and generates artifacts.
○ Archiving Artifacts: After the build, files specified in the post-build actions are archived
from the workspace.
○ Cleaning Up: Jenkins may clean up workspaces based on job configurations or policies
to save disk space.
4. Customizing Workspace:
○ Configure Custom Workspace Location:
■ In the job configuration, under the Advanced Project Options, you can
specify a custom workspace directory.
○ Workspace Cleanup:
■ Use plugins like the Workspace Cleanup Plugin to manage and clean up
workspaces automatically.
This guide should provide a comprehensive understanding of creating a Jenkins build and managing Jenkins
workspaces.
Course Code/Title:CS3V15/Devops Unit:IV
Ansible Introduction, Installation, Ansible master/slave configuration, YAML basics, Ansible modules,
Ansible Inventory files, Ansible playbooks, Ansible Roles, ad-hoc commands in Ansible.
Ansible
Ansible is a powerful DevOps tool designed for automating tasks on remote servers or nodes. In simple
terms, it allows you to automate commands and functions on multiple remote machines from a central
‘master’ node. To illustrate its usefulness, consider a scenario where you need to reboot dozens or even
hundreds of remote hosts. You could manually SSH into each one and initiate the reboot, or you can use
Ansible to streamline the process, making it efficient and offering a wide range of additional
functionalities.
In essence, Ansible operates over SSH (Secure Shell), a secure communication protocol, to control
remote nodes in a secure and optimized manner, without requiring any agent software on those nodes.
Ansible Components
1. Control Node – The machine where Ansible is installed and from which commands and playbooks are run.
2. Managed Node – The remote machines (hosts) that Ansible controls, typically over SSH.
3. Inventory – The file that lists the managed nodes and organizes them into groups.
1. Ad-Hoc Commands –
• These are one-off commands that you can execute using the ansible command.
2. Plugins –
• Examples include connectivity plugins for establishing connections and cache plugins.
3. Module–
• Modules are built-in functions that can be used to perform various tasks.
• They eliminate the need to write custom code for common operations.
• Examples include modules for package management (apt, yum), service management, and
more.
4. Playbook –
• Playbooks are a sequence of plays that define the order of tasks executed by Ansible.
• Tasks, in turn, are composed of one or more modules that specify operations.
5. Roles –
• Roles are a reusable, structured way of organizing automation content.
• They include templates, playbooks, handlers, vars, and meta information.
6. Collections –
• Collections are a distribution format for packaging roles, modules, playbooks, and plugins together.
7. Galaxy –
• Ansible Galaxy is a community hub for finding, sharing, and downloading roles and collections.
Prerequisite:
Before you begin setting up Ansible, it’s essential to ensure you meet the following prerequisites:
• SSH: Make sure SSH is installed on your Linux system. SSH is crucial for secure communication
between the control node and managed nodes.
Installation Steps:
3. SSH Key:
o For secure and direct connections to managed nodes, you’ll need to provide a public
SSH key. You can generate one using the ssh-keygen command.
o Follow the on-screen instructions to create your SSH key pair. This key pair is a crucial
element of Ansible’s secure communications.
4. For specific installation instructions tailored to your Linux distribution, refer to the official
Ansible installation documentation.
5. Once your managed node is set up, try connecting over SSH:
ssh <user>@<ip>
Note: If you're using a different operating system, find the appropriate installation instructions in the
official documentation.
Steps:
1. Install OpenSSH-Server:
o To enable remote management of the managed node, you'll need to install the OpenSSH
server. Use the following command (assuming you're using a Debian-based Linux
distribution like Ubuntu):
sudo apt install openssh-server
o This command installs the OpenSSH server, allowing secure remote access.
2. Create a User:
o For convenience and consistency, it’s helpful to create a user on the managed node with
the same name as the user on the control node. This makes it easier to manage SSH keys
and ensures a smoother experience.
o To establish secure and direct connections to the managed node, you need to paste the
public SSH key from the control node into the authorized_keys file located in the .ssh
directory of the user’s home folder on the managed node.
ssh-copy-id <user>@<managed_node_ip>
o This command securely copies your public key to the authorized_keys file, allowing
passwordless SSH authentication.
In the world of Ansible, inventories are like the backbone of your automation infrastructure. They serve
as a vital component that lists and organizes the managed nodes, making automation tasks seamless and
organized. Inventories use a parent-child concept that allows you to create groupings, which come in
handy during specific tasks or scenarios.
• Inventories provide a comprehensive list of managed nodes, making it easy to interact with and
manage them.
• Groupings in inventories enable you to organize your infrastructure, allowing you to target
specific subsets of nodes based on your needs. For instance, when you need to perform tasks in
a particular location, groupings become essential.
Creating Inventories
There are two primary methods to create an inventory on the control node: using a YAML file or an INI
file.
[virtualmachines]
host1 ansible_host=[Link]
192.168.0.2
2. In this INI file, it captures the IP addresses of managed nodes. The default username used for
SSH connections is typically the working user of your control node.
3. ansible_host: This parameter is used to specify the IP address, although you can write the IP
directly.
4. [Link] (YAML inventory):
---
  ansible_user: auriga # (optional) Provide the username if it's different from the control node's user.
o ansible_user: Use this to specify a different user if the managed node’s user is not the
same as the control node’s user.
After creating your inventory, it’s a good practice to verify its correctness. This step ensures that the
inventory is correctly structured and accessible to Ansible. To do this, use the following command:
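A common way to perform this verification is sketched below; the inventory file name is illustrative:

```shell
ansible-inventory -i inventory.yaml --list    # dump the parsed inventory as JSON
ansible all -i inventory.yaml --list-hosts    # list every host matched by the inventory
```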
output:
By following these steps, you have effectively set up and verified your inventory, ensuring a solid
foundation for your Ansible automation.
Introduction:
In Ansible, ad-hoc commands are your go-to solution for executing quick, one-off tasks on remote nodes.
They provide a straightforward and efficient way to interact with managed nodes without the need for
creating full-fledged playbooks. Ad-hoc commands are particularly useful when you need immediate
results without the overhead of playbook development.
Ad-hoc commands are best suited for scenarios where the task at hand is simple and doesn’t require the
complexity of a playbook. They are perfect for tasks like system health checks, package installation,
service management, or any other single-operation job.
Ad-hoc commands follow a specific syntax that comprises various components, each serving a unique
role:
• Target Group: This is the group of hosts you intend to target with the ad-hoc command.
• Module (-m): Specifies the module to execute. Modules are Ansible’s building blocks for
performing tasks, and they can range from basic operations like “ping” to more advanced tasks
such as package management.
• Inventory File (-i): Indicates the location of your inventory file, which defines the list of target
devices.
To illustrate the use of ad-hoc commands, let’s consider a simple task: pinging all devices listed in your
inventory. Here’s the command:
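A representative form of this command (the inventory file name is an assumption):

```shell
# Ping every host in the virtualmachines group
ansible virtualmachines -m ping -i inventory.ini
```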
virtualmachines is the target group. In this case, it could be any group, or you can use all to target all
devices in your inventory.
• -m ping specifies the “ping” module, a basic module that checks the reachability of the
managed nodes.
• The “ping” module sends a test command to the target devices and reports their status,
confirming whether they are responsive.
For a more practical example, let’s use the “apt” module to install Nginx on your managed nodes.
Here’s the command:
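A representative form of this command, matching the options described below (the inventory file name is an assumption):

```shell
# Install Nginx on all hosts in the virtualmachines group, with sudo
ansible virtualmachines -m apt -a "name=nginx state=present" -i inventory.ini --become --ask-become-pass
```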
• -m apt specifies the “apt” module, which is responsible for package management.
• -a “name=nginx state=present” includes variables for the module to act upon, such as
specifying that Nginx should be installed (state=present).
• --become signifies that the command should run with elevated privileges (sudo).
• --ask-become-pass prompts for the sudo password of the managed node to ensure the
installation proceeds smoothly.
By using these ad-hoc commands, you can perform quick, task-specific operations on your managed
nodes, saving time and effort in your automation tasks.
Common Modules:
Ansible offers an extensive library of modules for various tasks, from system administration to
application deployment. You can explore the full list of Ansible modules in the official documentation to
find the most suitable module for your specific requirements.
Best Practices:
When working with ad-hoc commands, consider using SSH key-based authentication for secure,
passwordless access. It streamlines the authentication process and enhances the security of your
automation tasks.
Playbooks
Introduction:
In the world of Ansible, playbooks are the cornerstone of structured and complex automation. While ad-
hoc commands are perfect for quick and isolated tasks, playbooks step in when you need a well-
orchestrated sequence of operations. They allow you to tackle multi-step automation scenarios where
tasks depend on each other, and conditional actions are required.
Let’s delve deeper into the importance of playbooks:
Playbooks serve as the framework for orchestrating automation tasks that involve a sequence of
operations. Whether it’s configuring servers, deploying applications, or managing infrastructure,
playbooks provide a structured way to define how these tasks are executed.
Ad-Hoc Limitations:
While ad-hoc commands are ideal for single, immediate tasks, they are not designed for complex, multi-
step automation. Playbooks come to the rescue when tasks have interdependencies and need to be
executed in a coordinated manner.
Playbooks are authored in YAML, a human-readable and straightforward format. This not only makes
them easy to write but also facilitates sharing and collaboration. You can reuse playbooks across various
scenarios, saving time and effort.
Playbooks offer advanced features, including conditional statements and looping, that allow you to adapt
automation to different situations. This flexibility makes playbooks versatile and capable of handling a
wide range of automation needs.
In essence, playbooks are your tool of choice when automation tasks become multi-faceted and require
a structured and logical approach. They provide the power to streamline and automate complex
workflows with precision and efficiency.
To illustrate the use of a playbook, let’s consider a simple task: pinging all devices listed in your
inventory. Here’s the playbook:
---
- hosts: all
  tasks:
    - name: Ping the managed nodes
      ping:
As you can see, a “Gathering Facts” task runs even though we did not create it; Ansible adds this
task by default, and running it also verifies the connection to the managed nodes.
For a more practical example, let’s use the “apt” module to install Nginx on your managed nodes.
Here’s the playbook:
---
- hosts: all
  become: yes
  tasks:
    - name: Install Nginx
      apt:
        name: nginx
        state: present
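To run a playbook like this, you would use the ansible-playbook command (the playbook and inventory file names here are assumptions):

```shell
ansible-playbook -i inventory.ini install-nginx.yml --ask-become-pass
```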
This command executes the specified playbook while prompting for the necessary privilege escalation
password.
In the example above, the first command encountered an error due to the absence of the
--ask-become-pass flag. This flag is essential when your task requires elevated privileges, as it
prompts Ansible to request the sudo password for authentication.
The corrected command includes the --ask-become-pass flag, ensuring that the necessary privileges are
obtained before executing the task. This is particularly important when working with tasks that require
administrative access, such as package installations or system configurations.
Note:
In this unit, we primarily focus on the core and fundamental components of Ansible, providing an
introductory overview and understanding of its key concepts. While Ansible offers advanced features
such as “Collections,” “Ansible Galaxy,” “Roles,” “Dynamic Inventory,” and “Custom Modules and
Plugins,” these topics are extensive and may require separate dedicated discussions.
For now, we aim to establish a strong foundation in Ansible’s core components. If
you’re interested in diving deeper into these advanced topics, explore Ansible’s official
documentation and additional resources dedicated to each subject.
Understanding YAML
Ansible uses YAML syntax for expressing Ansible playbooks. This chapter provides an
overview of YAML. Ansible uses YAML because it is very easy for humans to understand,
read and write when compared to other data formats like XML and JSON.
Every YAML file optionally starts with “---” and ends with “...”.
In this section, we will learn the different ways in which the YAML data is represented.
key-value pair
YAML uses simple key-value pairs to represent data. A dictionary is represented as a set of
key: value pairs.
Abbreviation
A dictionary can also be written in flow style on a single line, for example: {name: james john, rollNo: 34}
Example
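For example, a small dictionary of key-value pairs:

```yaml
---
name: james john
rollNo: 34
div: B
```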
Representing List
We can also represent a List in YAML. Every element (member) of the list should be written on a new
line with the same indentation, starting with “- ” (a hyphen and a space).
Example
---
countries:
- America
- China
- Canada
- Iceland
Abbreviation
A list can also be written in flow style on a single line, for example: countries: [America, China, Canada, Iceland]
Representing Dictionary
Example
---
james:
  name: james john
  rollNo: 34
  div: B
  sex: male
  likes:
    - maths
    - physics
    - english
…
List of Dictionaries
Example
---
- james:
    name: james john
    rollNo: 34
    div: B
    sex: male
    likes:
      - maths
      - physics
      - english
- robert:
    name: robert richardson
    rollNo: 53
    div: B
    sex: male
    likes:
      - biology
      - chemistry
…
YAML uses “|” to keep newlines in multi-line text and “>” to fold (suppress) newlines in
multi-line text. This makes large blocks of text easy to read and edit. In both cases the
leading indentation is stripped.
We can also represent Boolean (true/false) values in YAML, and boolean values are
case-insensitive.
Example
---
- james:
    name: james john
    rollNo: 34
    div: B
    sex: male
    likes:
      - maths
      - physics
      - english
    result:
      maths: 87
      chemistry: 45
      biology: 56
      physics: 70
      english: 80
    passed: TRUE
    messageIncludeNewLines: |
      Congratulation!!
      You passed with 79%
    messageExcludeNewLines: >
      Congratulation!!
      You passed with 79%
Ansible Ad-hoc Commands
The Ansible ad-hoc command uses the /usr/bin/ansible command-line tool to automate a single
task on one or more managed nodes. Ad-hoc commands are quick and easy, but they are not
reusable. Ad-hoc commands demonstrate the simplicity and power of Ansible.
Syntax
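The general shape of an ad-hoc command can be sketched as follows (angle-bracket items are placeholders):

```shell
ansible <hosts> -m <module> -a "<arguments>" -u <username> [--become] -i <inventory>
```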
Explanation
Hosts: It can be an entry in the inventory file. For specifying all hosts in the inventory, use all
or "*".
Arguments: We should pass values that are required by the module. It can change according to
the module used.
Username: It specifies the user account in which Ansible can execute commands.
Become: An optional parameter, specified when we want to run operations that need sudo
privilege. By default, become is false.
1. Parallelism and Shell Commands
You can reboot your company servers in 12 parallel forks at the same time. For this, you need to
set up the SSH agent for connection.
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
To run reboot for all your company servers in the group, 'abc', in 12 parallel forks:
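The command would look like this:

```shell
# Reboot all servers in group 'abc', 12 forks at a time
ansible abc -a "/sbin/reboot" -f 12
```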
By default, Ansible will run the above ad-hoc commands from the current user account. If you
want to change the user, pass the username in the ad-hoc command as follows:
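For example, to run the reboot as a specific user (the user name is a placeholder):

```shell
ansible abc -a "/sbin/reboot" -f 12 -u username
```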
2. File Transfer
You can use ad-hoc commands for doing SCP (Secure Copy Protocol), which means transferring
lots of files in parallel to multiple machines or servers.
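A sketch using the copy module:

```shell
# Transfer a local file to all hosts in group 'abc'
ansible abc -m copy -a "src=/etc/hosts dest=/tmp/hosts"
```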
3. Managing Packages
Ad-hoc commands are available for the apt and yum modules. Here are some ad-hoc
commands using yum.
The command below checks whether a package is installed, without updating it:
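A sketch with the yum module (the package name is a placeholder):

```shell
# Ensure the httpd package is installed, without upgrading it
ansible abc -m yum -a "name=httpd state=present"
```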
4. Managing Users and Groups
You can manage, create, and remove user accounts on your managed nodes with ad-hoc
commands.
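A sketch with the user module (the user name is a placeholder):

```shell
# Create a user account; the password must be a pre-hashed (crypted) value
ansible all -m user -a "name=foo password=<crypted password>"
# Remove a user account
ansible all -m user -a "name=foo state=absent"
```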
5. Managing Services
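Ad-hoc commands can manage services through the service module; a sketch:

```shell
# Ensure the httpd service is started
ansible abc -m service -a "name=httpd state=started"
# Restart the service
ansible abc -m service -a "name=httpd state=restarted"
```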
6. Gathering Facts
Facts represent the discovered variables about a system. You can use facts to implement
conditional execution of tasks, and also to get ad-hoc information about your systems. To
see all the facts:
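The setup module prints all discovered facts:

```shell
ansible all -m setup
```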
Ansible Playbooks
Playbooks are the files where the Ansible code is written. Playbooks are written in YAML format.
YAML originally meant "Yet Another Markup Language" but now stands for "YAML Ain't Markup Language"; either way, not much syntax is needed.
Playbooks are one of the core features of Ansible and tell Ansible what to execute, and it is used
in complex scenarios. They offer increased flexibility.
Playbooks contain the steps which the user wants to execute on a particular machine, and those
steps run sequentially. Playbooks are the building blocks for all the use cases of Ansible.
Through a playbook, you can designate specific roles to some of the hosts and other roles to other
hosts. By doing this, you can orchestrate multiple servers in very different scenarios, all in one
playbook.
Playbook Structure
Each playbook is a collection of one or more plays; there can be more than one play inside a playbook.
---
- name: install and configure DB
  hosts: testServer
  become: yes
  vars:
    oracle_db_port_value: 1521
  tasks:
    - name: Install the Oracle DB
      yum: <code to install the DB>
name
This tag specifies the name of the Ansible playbook. As in what this playbook will be doing.
Any logical name can be given to the playbook.
hosts
This tag specifies the lists of hosts or host group against which we want to run the task. The
hosts field/tag is mandatory. It tells Ansible on which hosts to run the listed tasks. The tasks can
be run on the same machine or on a remote machine. One can run the tasks on multiple
machines and hence hosts tag can have a group of hosts’ entry as well.
vars
Vars tag lets you define the variables which you can use in your playbook. Usage is similar to
variables in any programming language.
tasks
All playbooks should contain tasks, or a list of tasks to be executed. Tasks are a list of actions
one needs to perform. Each task has a name field; this works as the help text
for the user. It is not mandatory but proves useful in debugging the playbook. Each task
internally links to a piece of code called a module, together with the arguments that are
required for the module you want to execute.
Ansible Roles
Roles provide a framework for fully independent or interdependent collections of files, tasks,
templates, variables, and modules.
The role is the primary mechanism for breaking a playbook into multiple files. This simplifies
writing complex playbooks and makes them easier to reuse, because it lets you split a playbook
into reusable components.
Each role is limited to a particular functionality or desired output, with all the necessary steps to
provide that result either within the same role itself or in other roles listed as dependencies.
Roles are not playbooks. Roles are small units of functionality that can be used within playbooks
independently. Roles have no specific setting for which hosts they will apply to.
Top-level playbooks are the bridge holding the hosts from your inventory file to roles that should
be applied to those hosts.
Creating a Role
The directory structure for roles is essential to creating a new role, such as:
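One common way to create this structure is with the ansible-galaxy command:

```shell
# Scaffold a new role with the standard directory layout
$ ansible-galaxy init role_name
```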
Role Structure
The roles have a structured layout on the file system. You can change the default structure of
the roles as well.
For example, let us stick to the default structure of the roles. Each role is a directory tree in itself.
So the role name is the directory name within the /roles directory.
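A typical default role layout looks like this (here "common" is an example role name):

```text
roles/
  common/
    tasks/
      main.yml
    handlers/
      main.yml
    templates/
    files/
    vars/
      main.yml
    defaults/
      main.yml
    meta/
      main.yml
```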
$ ansible-galaxy -h
Usage
ansible-galaxy [delete|import|info|init|install|list|login|remove|search|setup] [--help] [options] ...
Options
Roles are stored in separate directories and have a particular directory structure
Ansible Module
Modules are discrete units of code that Ansible runs on the managed nodes; each module performs a
single task, such as running a command, copying a file, or installing a package. The examples that
follow use the command module.
Example 1: Get the uptime of remote servers
We have used the command module to run the uptime command, and we give both the ad-hoc
and the playbook forms of execution.
As Ad-Hoc Command
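A sketch of the ad-hoc form:

```shell
ansible testservers -m command -a "uptime"
```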
As Playbook
---
- name: Check the remote host uptime
  hosts: testservers
  tasks:
    - name: Execute the Uptime command over Command module
      register: uptimeoutput
      command: "uptime"
    - debug:
        var: uptimeoutput.stdout_lines
Example 2: Get the Hostname and Version of remote servers with UNAME
As Ad-Hoc Command
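A sketch of the ad-hoc form:

```shell
ansible testservers -m command -a "uname -a"
```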
As Playbook
---
- name: Check the remote host Hostname, Version, Distribution with UNAME
  hosts: testservers
  tasks:
    - name: Execute the UNAME command
      register: unameout
      command: "uname -a"
    - debug:
        var: unameout.stdout_lines
Example 3: Check the disk usage of all the file systems in the remote servers
As Ad-Hoc Command
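A sketch of the ad-hoc form:

```shell
ansible testservers -m command -a "df -h"
```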
As Playbook
---
- name: Check the disk usage of all the file system in the remote servers
  hosts: testservers
  tasks:
    - name: Execute the df command
      register: dfout
      command: "df -h"
    - debug:
        var: dfout.stdout_lines
Example 4: Restart the Apache web server
So far, all the samples we have seen use one application server and one web server, two servers
in total. In this example, however, we need to limit our execution to the web server only, as we
are going to restart the Apache web server.
As Ad-Hoc Command
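A sketch of the equivalent ad-hoc command, targeting the web server host directly (the host name matches the condition used in the playbook):

```shell
ansible mwiweb02 -m command -a "httpd -k restart" --become
```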
As Playbook
---
- name: restart apache web server
  hosts: testservers
  tasks:
    - name: restartapache
      register: httpdresout
      become: yes
      command: "httpd -k restart"
      when: ansible_hostname == "mwiweb02"
    - debug:
        var: httpdresout.stdout_lines
There are two especially useful parameters in the Ansible command module: removes and creates.
removes - tells Ansible to execute the command only if the given file exists
creates - tells Ansible to execute the command only if the given file does not exist
Here we are going to do a very simple file creation and removal based on the file's
availability or existence. In general, using the Ansible command module for creating and
removing files is not recommended, as Ansible has a dedicated module named file that
does the same effortlessly.
As Ad-Hoc Command
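A sketch of the equivalent ad-hoc command; the creates= argument makes the touch run only when /tmp/latestfile does not exist:

```shell
ansible testservers -m command -a "touch /tmp/latestfile creates=/tmp/latestfile"
```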
As Playbook
---
- name: "Validate if a file is present or not present using Ansible Command module"
  hosts: testservers
  tasks:
    - name: "Create a file if it does not exist"
      command: "touch /tmp/latestfile"
      args:
        creates: "/tmp/latestfile"
      register: createif
    - debug: var=createif
In the playbook, the creates argument ensures that the touch command runs only when /tmp/latestfile does not already exist.
Example 5: Execute or run a script when a file exists or does not exist
Now for this example, let us take something relatable to a real-world scenario: start the
server instance (or program) if the PID file or LOCK file does not exist.
Here is a sample playbook to get you started. It is written as a general template, rather than
for a specific server or technology, so you can relate it to your own environment.
Feel free to modify the script to your needs, as this is just a template.
---
- name: Start or Stop Server instance based on PID/LOCK file availability
  hosts: appservers
  tasks:
    - name: Start the instance when the PID file is not present
      become: yes
      become_user: appuser
      command: "/path/to/start-script.sh" # placeholder: replace with your start script
      args:
        creates: "/path/to/pid/server.pid" # placeholder: replace with your PID file
      register: startinst
Create GitHub Account, Create Repository, Create Azure Organization, Create a new
pipeline, Build a sample code, Modify the azure-pipelines.yml file
What is GitHub?
GitHub is a code hosting platform for collaboration and version control.
GitHub lets you and others work together on projects from anywhere.
GitHub is owned by Microsoft and provides access to public (free) and private (paid) repositories.
Steps to create a GitHub Account:
Step 1: Go to github.com in a web browser and click Sign up.
Step 2: Enter your personal details. In addition to creating a username and entering an email
address, you'll also have to create a password. Your password must be at least 15 characters in
length or at least 8 characters with at least one number and lowercase letter.
Step 3: Click Verify to start the verification puzzle. The instructions vary by puzzle,
so just follow the on-screen instructions to confirm that you are a human. A green
checkmark will appear after completing the puzzle.
Step 4: Click the green Create account button. It's below the form, at the bottom of the
page. This will take you to an email verification page.
Step 5: Verify your email by entering the code. After clicking Create account, you’ll
receive an email with a verification code. Enter this code on the verification page. Entering the
code will automatically take you to the welcome page.
Step 6: Select your preferences and click Continue. GitHub displays a quick survey that
can help you tailor your experience to match what you're looking for. You’ll be sent to the
plan selection page after completing the survey.
Step 7: Note the types of plans offered by GitHub. There are a few different plans to choose
from, varying in the features provided.
Step 8: Select the free plan. On the plan selection page, scroll down to click the button for choosing
a free plan. This will immediately take you to your GitHub dashboard.
Essential GitHub concepts include:
• Repositories
• Branches
• Commits
• Pull Requests
• Git (the version control software GitHub is built on)
Repository:
A repository is usually used to organize a single project. It can contain folders, files, images,
videos, spreadsheets, and datasets: anything your project needs.
Step 1: To create a new repository, click the + icon in the upper-right corner of GitHub and
select New repository.
Step 2: After clicking new repository option, we will have to initialize some things like, naming
our project, choosing the visibility etc. After performing these steps click Create Repository
button.
Step 3: After clicking the button, we will be directed to the new repository's page. Right now
the only file we have is a README file.
Step 6: Now you will see that all of our files are uploaded to our GitHub repository.
Branch:
Branching lets you work on different versions of a repository at one time; changes on a branch
do not affect the main branch until they are merged.
Pull Requests:
A pull request proposes your changes and requests that someone review and merge your
contribution into their branch.
Git:
Git is the open-source distributed version control system that GitHub is built on; it tracks
changes to files over time.
AZURE:
Microsoft Azure is Microsoft's cloud computing platform. It was officially launched as
Windows Azure in February 2010 and later renamed Microsoft Azure on March 25, 2014.
Prerequisites:
Create an organization:
With your organization, the following aspects are included in the free tier:
Prerequisites:
Account requirements:
What is a pipeline?
A pipeline defines the continuous integration and deployment process for your app; it is made
up of steps that build, test, and deploy your code. The latest way to build pipelines is with the
YAML pipeline editor. You can also use Classic pipelines with the Classic editor.
Create your first pipeline:
1. Sign in to your Azure DevOps organization and go to your project.
2. Go to Pipelines, and then select New pipeline.
3. Do the steps of the wizard by first selecting GitHub as the location of your source
code.
4. You might be redirected to GitHub to sign in. If so, enter your GitHub
credentials.
5. When the list of repositories appears, select your repository.
6. You might be redirected to GitHub to install the Azure Pipelines app. If so, select
Approve & install.
7. Azure Pipelines will analyze your repository and recommend the Maven pipeline
template.
8. When your new pipeline appears, take a look at the YAML to see what it does. When
you're ready, select Save and run.
9. You're prompted to commit a new azure-pipelines.yml file to your repository. After
you're happy with the message, select Save and run again.
If you want to watch your pipeline in action, select the build job.
You just created and ran a pipeline that we automatically created for you,because your
code appeared to be a good match for the Maven template.
10. When you're ready to make changes to your pipeline, select it in the Pipelines
page, and then Edit the azure-pipelines.yml file.
You can view and manage your pipelines by choosing Pipelines from the left-hand menu to
go to the pipelines landing page.
From the pipelines landing page you can view pipelines and pipeline runs, create and
import pipelines, manage security, and drill down into pipeline and run details.
Choose Recent to view recently run pipelines (the default view), or choose All to view
all pipelines.
Select a pipeline to manage that pipeline and view the runs. Select the build number for the last
run to view the results of that build, select the branch name to view the branch for that run, or
select the context menu to run the pipeline and perform other management actions.
Select Runs to view all pipeline runs. You can optionally filter the displayed runs.
You can choose to Retain or Delete a run from the context menu. For more information on run
retention, see Build and release retention policies.
The details page for a pipeline allows you to view and manage that pipeline.
Choose Edit to edit your pipeline. For more information, see YAML pipeline editor. You can
also edit your pipeline by modifying the azure-pipelines.yml file directly in the repository that
hosts the pipeline.
View pipeline run details
From the pipeline run summary you can view the status of your run, both while it is running and
when it is complete.
From the summary pane you can view job and stage details, download artifacts, and navigate to
linked commits, test results, and work items.
From the steps view, you can review the status and details of each step. From the More actions
menu you can toggle timestamps or view a raw log of all steps in the pipeline.
From the More actions menu you can download logs, add tags, edit the pipeline,
delete the run, and configure retention for the run.
Add a status badge to your repository:
1. Go to the list of files and select README.md. Select the pencil icon to edit.
2. Paste the status badge Markdown at the beginning of the file.
3. Commit the change to the main branch.
4. Notice that the status badge appears in the description of your repository.
To configure anonymous access to badges for private projects:
Azure-Pipeline
• Azure Pipelines provides a YAML pipeline editor that you can use to author
and edit your pipelines.
• You can also edit pipelines by modifying the azure-pipelines.yml file directly
in your pipeline's repository using a text editor of your choice.
• Choose Edit.
• Make edits to your pipeline using Intellisense and the task assistant for guidance.
• The YAML pipeline editor provides several keyboard shortcuts, which we show in
the following examples.
• Choose Ctrl+Space for Intellisense support while you're editing the YAML
pipeline.
• Navigate to the Pipelines page in Azure Pipelines, select the pipeline you
created, and choose Edit in the context menu of the pipeline to open the
YAML editor for the pipeline.
• This pipeline runs whenever your team pushes a change to the main branch of
your repo or creates a pull request. It runs on a Microsoft-hosted Linux machine.
• The pipeline process has a single step, which is to run the Maven task.
YAML code:
trigger:
  - main

strategy:
  matrix:
    jdk10_linux:
      imageName: "ubuntu-latest"
      jdkVersion: "1.10"
    jdk11_windows:
      imageName: "windows-latest"
      jdkVersion: "1.11"
  maxParallel: 2

pool:
  vmImage: $(imageName)

steps:
  - task: Maven@4
    inputs:
      mavenPomFile: "pom.xml"
      mavenOptions: "-Xmx3072m"
      javaHomeOption: "JDKVersion"
      jdkVersionOption: $(jdkVersion)
      jdkArchitectureOption: "x64"
      publishJUnitResults: true
      testResultsFiles: "**/TEST-*.xml"
      goals: "package"
Change the platform to build
The Maven task we used handles testing and publishing results; however, you can also use a
task to publish code coverage results.