<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>CI/CD Archives - World Of Agile</title>
	<atom:link href="https://effectivepmc.net/blog/tag/ci-cd/feed/" rel="self" type="application/rss+xml" />
	<link></link>
	<description></description>
	<lastBuildDate>Tue, 22 Apr 2025 04:19:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://effectivepmc.net/wp-content/uploads/2020/06/cropped-woa_logo-1-150x150.png</url>
	<title>CI/CD Archives - World Of Agile</title>
	<link></link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Deployment Pipeline</title>
		<link>https://effectivepmc.net/blog/deployment-pipeline/</link>
		
		<dc:creator><![CDATA[Snehamayee]]></dc:creator>
		<pubDate>Sat, 31 Mar 2018 18:30:13 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1638</guid>

					<description><![CDATA[<p>Deployment Pipeline A Deployment pipeline is an important concept in Continuous Delivery. In a Deployment pipeline we break the build process into distinct stages. In each stage we get feedback that tells us whether to move on to the next stage. It is a collaborative effort between the various groups involved in delivering Agile Software Development. Often the first stage in a Deployment pipeline is [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/deployment-pipeline/">Deployment Pipeline</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Deployment Pipeline</h1>
<p>A Deployment pipeline is an important concept in <a href="https://effectivepmc.net/blog/continuous-delivery/">Continuous Delivery</a>. In a Deployment pipeline we break the build process into distinct stages. In each stage we get feedback that tells us whether to move on to the next stage.</p>
<p>It is a collaborative effort between various groups involved in delivering <a href="https://effectivepmc.net/blog/agile/">Agile Software Development</a>.</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Deployment-Pipeline.png"><img decoding="async" class="alignnone size-full wp-image-1641" src="https://effectivepmc.net/wp-content/uploads/2018/03/Deployment-Pipeline.png" alt="Deployment Pipeline" width="626" height="479" /></a></p>
<p>Often the first stage in a Deployment pipeline is compiling the code and converting it into binaries. After that, we run the automated tests. Depending on the scenario, a Deployment pipeline may also include stages such as performance testing, security checks and usability testing.</p>
<p>In <a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> our aim is to automate all the stages of the Deployment pipeline. With a smooth-running Deployment Pipeline, we can achieve the goal of Continuous Delivery.</p>
<p>At an abstract level, a deployment pipeline is an automated manifestation of your process for getting software from version control into the hands of your users. Every change to your software goes through a complex process on its way to being released. That process involves building the software, followed by the progress of those builds through multiple stages of testing and deployment.</p>
<p><strong>Steps in the Deployment Pipeline include</strong></p>
<ul>
<li>The deployment pipeline begins when a developer commits code to a shared version control system.</li>
<li>Prior to doing this commit, the developer will have performed a series of pre-commit tests on their local environment. The failure of the pre-commit tests of course means that the commit does not take place.</li>
<li>A commit then triggers an integration build of the service being developed. This build is tested by the integration tests.</li>
<li>If these tests are successful, the build is promoted to a quasi-production environment – the staging environment.</li>
<li>Then it is promoted to production under close supervision.</li>
<li>After another period of close supervision, it is promoted to normal production.</li>
<li>The specific tasks may vary a bit for different organizations. For example, a small company may not have a staging environment or special supervision for a recently deployed version. A larger company may have several different production environments for different purposes.</li>
<li>One way to define continuous integration is to have automatic triggers between one phase and the next, up to the integration tests. That is, if the build is successful then the integration tests are triggered. If not, the developer responsible for the failure is notified.</li>
<li>Continuous delivery is defined as having automated triggers as far as the staging system.</li>
<li>Once a service is deployed into production it is closely monitored for a period and then it is promoted into normal production.</li>
<li>At this final stage, monitoring and testing still exist but the service is no different from other services in this regard.</li>
<li>The committed code moves through the different steps described above but the code does not move on its own. Rather, it is moved by tools. The tools are controlled by their programs (scripts) or by developer/operator commands.</li>
<li>Traceability is of utmost importance in this movement. Traceability means that, for any system in production, it is possible to determine exactly how it came to be in production. This means keeping track not only of source code but also of all commands to all the tools that acted on the elements of the system. Individual commands are difficult to trace; therefore, it is better to use scripts instead of ad hoc commands.</li>
<li>Treating infrastructure-as-code means that the infrastructure scripts and movement scripts should be subject to the same quality control as application source code, and regulated in the same fashion.</li>
</ul>
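<p>The automatic triggering between phases described above can be sketched as a chain of gated stages. This is a hypothetical illustration in Python, not any particular CI server&#8217;s API; all stage names and callables are placeholders:</p>

```python
# Minimal sketch of a gated pipeline: each stage runs only if the
# previous one succeeded, and a failure notifies the responsible party.

def run_pipeline(stages, notify):
    """Run stages in order; stop and notify on the first failure."""
    for name, stage in stages:
        if not stage():
            notify(f"stage '{name}' failed")
            return False
    return True

# Placeholder stages standing in for real build/test/deploy steps.
results = []
stages = [
    ("build", lambda: results.append("build") or True),
    ("integration-tests", lambda: results.append("test") or True),
    ("deploy-staging", lambda: results.append("staging") or True),
]

failures = []
ok = run_pipeline(stages, failures.append)
print(ok, results)  # True ['build', 'test', 'staging']
```

<p>A real CI server plays the role of <code>run_pipeline</code> here: it wires stage completion events to the next stage and routes failure notifications back to the committer.</p>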
<h2>A Basic Deployment Pipeline</h2>
<p>The above figure shows a typical deployment pipeline and captures the essence of the approach.</p>
<ul>
<li>The process starts with the developers committing changes into their version control system. At this point, the continuous integration management system responds to the commit by triggering a new instance of our pipeline. The first (commit) stage of the pipeline:</li>
</ul>
<p>o   compiles the code,</p>
<p>o   runs unit tests,</p>
<p>o   performs code analysis and</p>
<p>o   creates installers</p>
<ul>
<li>The second stage is typically composed of longer-running automated acceptance tests. Your CI server should let you split these tests into suites which can be executed in parallel to increase their speed and give you feedback faster – typically within an hour or two. This stage is triggered automatically by the successful completion of the first stage in your pipeline.</li>
<li>At this point, the pipeline branches to enable independent deployment of your build to various environments – in this case, UAT, Capacity testing and Production.</li>
</ul>
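<p>Splitting long-running acceptance tests into suites that execute in parallel, as suggested above, can be sketched with a thread pool. The suite names below are made up for illustration:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def run_suites_in_parallel(suites):
    """Run independent acceptance-test suites concurrently.

    `suites` maps a suite name to a zero-argument callable returning
    True on success; the stage passes only if every suite passes.
    """
    with ThreadPoolExecutor(max_workers=len(suites)) as pool:
        futures = {name: pool.submit(fn) for name, fn in suites.items()}
        return {name: f.result() for name, f in futures.items()}

# Hypothetical suites standing in for real acceptance tests.
suite_results = run_suites_in_parallel({
    "checkout-flow": lambda: True,
    "search": lambda: True,
    "user-profile": lambda: False,  # one failing suite fails the stage
})
stage_passed = all(suite_results.values())
print(stage_passed)  # False
```

<p>In practice each callable would shell out to a test runner against a deployed environment; the point is only that independent suites can run concurrently and the stage gates on all of them.</p>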
<h3>Commit Stage</h3>
<p>A new instance of your deployment pipeline is created upon every check-in and, if the first stage passes, results in the creation of a release candidate. The aim of the first stage in the pipeline is to eliminate builds that are unfit for production and signal the team that the application is broken as quickly as possible. We want to expend a minimum of time and effort on a version of the application that is obviously broken. So, when a developer commits a change to the version control system, we want to evaluate the latest version of the application quickly. The developer who checked in then waits for the results before moving on to the next task. There are a few things we want to do as part of the commit stage:</p>
<ul>
<li>Compile the code (if necessary)</li>
<li>Run a set of commit tests</li>
<li>Create binaries for use by the later stages</li>
<li>Perform analysis of the code to check its health</li>
<li>Prepare artifacts such as test databases for use by the later stages.</li>
</ul>
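<p>The commit-stage tasks listed above can be organized as a single fail-fast script with one step per task. The step callables here are hypothetical placeholders for a real compiler, test runner, analyzer and packager:</p>

```python
def commit_stage(compile_code, run_commit_tests, analyze, package):
    """Run the commit-stage tasks in order, failing fast.

    Returns (passed, completed_steps) so the CI server can report
    exactly which step broke the build.
    """
    steps = [
        ("compile", compile_code),
        ("commit-tests", run_commit_tests),
        ("code-analysis", analyze),
        ("package-binaries", package),
    ]
    completed = []
    for name, step in steps:
        if not step():
            return False, completed
        completed.append(name)
    return True, completed

ok, done = commit_stage(lambda: True, lambda: True, lambda: True, lambda: True)
print(ok, done)  # True ['compile', 'commit-tests', 'code-analysis', 'package-binaries']
```

<p>Failing fast matters here: if compilation breaks, there is no value in running tests or analysis, and the committer gets feedback sooner.</p>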
<p>If you only implement a commit stage in your development process, it usually represents an enormous step forward in the reliability and quality of the outputs of your teams. However, there are several more stages necessary to complete what we consider to be a minimal deployment pipeline.</p>
<h3>Automated Acceptance Test Gate</h3>
<p>A comprehensive commit test suite is an excellent litmus test for many classes of errors, but there is much that it won’t catch. Unit tests, which comprise the vast majority of the commit tests, are so coupled to the low-level API that it is often hard for developers to avoid the trap of asserting that the solution works in a particular way rather than asserting that it solves a particular problem.</p>
<p>Commit tests that run against every check-in provide us with timely feedback on problems with the latest build and on bugs in our application. But without running acceptance tests in a production-like environment, we know nothing about whether the application meets the customer’s specifications, nor whether it can be deployed and survive in the real world. If we want timely feedback on these topics, we must extend the range of our continuous integration process to test and rehearse these aspects of our system too.</p>
<p>The goal of the acceptance test stage is to assert that the system delivers the value the customer is expecting and that it meets the acceptance criteria. The acceptance test stage also serves as a regression test suite, verifying that no bugs are introduced into existing behavior by the new changes.</p>
<p>The automated acceptance test gate is the second significant milestone in the lifecycle of a release candidate.</p>
<h3>Subsequent Test Stages</h3>
<p>The acceptance test stage is a significant milestone in the lifecycle of a release candidate. Once this stage has been completed, a successful release candidate has moved on from something that is largely the domain of the development team to something of wider interest and use.</p>
<p>For the simplest deployment pipelines, a build that has passed acceptance testing is ready for release to users, at least as far as the automated testing of the system is concerned. If the candidate fails this stage, it by definition is not fit to be released.</p>
<p>The progression of the release candidate to this point has been automatic, with successful candidates being automatically promoted to the next stage. If you are delivering software incrementally, it is possible to have an automated deployment to production. But for many systems, some form of manual testing is desirable before release, even when you have a comprehensive set of automated tests. Many projects have environments for testing integration with other systems, environments for testing capacity, exploratory testing environments, and staging/production-like environments. Each of these environments can be more or less production-like, and each has its own unique configuration.</p>
<h3>Preparing to Release</h3>
<p>There is a business risk associated with every release of a production system. At best, if there is a serious problem at the point of release, it may delay the introduction of valuable new capabilities. At worst, if there is no sensible back-out plan in place, it may leave the business without mission-critical resources because they had to be decommissioned as part of the release of the new system. The mitigation of these problems is very simple when we view the release step as a natural outcome of our deployment pipeline. Fundamentally we want to:</p>
<ul>
<li>Have a release plan that is created and maintained by everybody involved in delivering the software, including developers and testers as well as operations, infrastructure and support personnel.</li>
<li>Minimize the effect of people making mistakes by automating as much of the process as possible, starting with the most error-prone stages.</li>
<li>Rehearse the procedure often in production-like environments, so that you can debug the process and the technology supporting it.</li>
<li>Have the ability to back out a release if things don’t go according to plan</li>
<li>Have a strategy for migrating configuration and production data as part of the upgrade and rollback processes.</li>
</ul>
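<p>The ability to back out a release, as the last two points require, can be sketched as a deploy step that records the previous version and restores it when a post-deploy check fails. The in-memory "environment", version strings and health check below are invented for illustration:</p>

```python
def deploy_with_rollback(env, new_version, deploy, health_check):
    """Deploy `new_version`; restore the previous version if checks fail."""
    previous = env.get("version")
    deploy(env, new_version)
    if health_check(env):
        return True
    deploy(env, previous)  # back-out plan: last known-good version
    return False

# Hypothetical environment state and deploy action.
env = {"version": "1.4.2"}
set_version = lambda e, v: e.update(version=v)

ok = deploy_with_rollback(env, "1.5.0", set_version,
                          lambda e: e["version"] == "1.5.0")
print(ok, env["version"])  # True 1.5.0
```

<p>The key design point is that rollback reuses the same deploy mechanism as the forward path, so the back-out plan gets exercised by the same scripts that are rehearsed in every environment.</p>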
<h2>Implementing a Deployment Pipeline</h2>
<p>Whether you are starting a new project from scratch or trying to create an automated pipeline for an existing system, you should generally take an incremental approach to implementing a deployment pipeline. In general the steps look like this:</p>
<ul>
<li>Model your value stream and create a walking skeleton</li>
<li>Automate the build and deployment process</li>
<li>Automate unit tests and code analysis</li>
<li>Automate acceptance tests</li>
<li>Automate releases</li>
</ul>
<h2>Build and Deployment Scripting</h2>
<p>One of the core principles of environment management is that changes to testing and production environments should only be made through an automated process. That means you should not log into such systems remotely to make changes or perform deployments by hand; deployments should always be entirely scripted.</p>
<p>Some of the principles of Build and Deployment Pipeline Scripting are as follows.</p>
<p><strong><u>Create a Script for Each Stage in your Deployment Pipeline</u></strong></p>
<p>Taking this approach ensures that our scripts have a well-defined structure that helps us to keep them clean during maintenance and minimizes dependencies between components of our build and deployment system. Luckily, the deployment pipeline provides an excellent organizing principle for dividing up responsibilities between build scripts. When you first start your project, it makes sense to have a single script containing every operation that will be performed in the course of executing the deployment pipeline, with dummy targets for steps that are not yet automated. However, once your script gets sufficiently long, you can divide it up into separate scripts for each stage in your pipeline. Thus you will have a commit script containing all the targets required to compile your application, package it, run the commit test suite, and perform static analysis of the code. You then need a functional acceptance test script that calls your deployment tool to deploy the application to the appropriate environment, then prepares any data, and finally runs the acceptance tests.</p>
<p><strong><u>Use an Appropriate Technology to Deploy Your Application </u></strong></p>
<p>In a typical deployment pipeline, most stages that follow a successful commit stage, such as the automated acceptance test stage and user acceptance test stage, depend upon the application being deployed to a production-like environment. It is vital that this deployment is automated too. However, you should use the right tool for the job when automating deployment, not a general-purpose scripting language (unless the deployment process is extremely simple).</p>
<p><strong><u>Use the Same Scripts to Deploy to Every Environment </u></strong></p>
<p>It is essential to use the same process to deploy to every environment in which your application runs to ensure that the build and deployment process is tested effectively. That means using the same scripts to deploy to each environment and representing the differences between environments—such as service URIs and IP addresses—as configuration information to be managed separately. Separate out configuration information from the script and store it in version control, providing some mechanism for your deployment script to retrieve it. It is essential that both build and deployment scripts work on developers’ machines as well as on production-like environments, and that they are used to perform all build and deployment activities by developers.</p>
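<p>Keeping one deploy script and representing per-environment differences as separately managed configuration, as described above, might look like the sketch below. The environment names, URIs and keys are made up for illustration; in practice the configuration would live in version control and be fetched by the script:</p>

```python
# One deploy function for every environment; only the configuration differs.
CONFIG = {
    "staging":    {"service_uri": "https://staging.example.internal", "replicas": 1},
    "production": {"service_uri": "https://prod.example.internal",    "replicas": 4},
}

def deploy(environment, version, apply_deployment):
    """Deploy `version` with identical steps everywhere; only config varies."""
    cfg = CONFIG[environment]
    return apply_deployment(version=version, **cfg)

# Record the deployment calls instead of performing real ones.
calls = []
record = lambda **kwargs: calls.append(kwargs) or True

deploy("staging", "2.0.1", record)
deploy("production", "2.0.1", record)
print(calls[0]["replicas"], calls[1]["replicas"])  # 1 4
```

<p>Because every environment goes through the same <code>deploy</code> path, each lower-environment deployment is also a rehearsal of the production one.</p>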
<p><strong><u>Ensure the Deployment Process Is Idempotent </u></strong></p>
<p>Your deployment process should always leave the target environment in the same (correct) state, regardless of the state it finds it in when starting a deployment. The simplest way to achieve this is to start with a known-good baseline environment, provisioned either automatically or through virtualization. This environment should include all the appropriate middleware and anything else your application requires to work. Your deployment process can then fetch the version of the application you specify and deploy it to this environment, using the appropriate deployment tools for your middleware.</p>
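<p>Idempotence as described above means that re-running the deployment converges the environment to the same correct state regardless of where it started. A minimal sketch, with invented state keys:</p>

```python
def converge(env, desired):
    """Bring `env` to the `desired` state regardless of its starting state.

    Only keys that differ are changed, so running it twice leaves the
    environment identical -- the idempotence property.
    """
    changes = {k: v for k, v in desired.items() if env.get(k) != v}
    env.update(changes)
    return changes

desired = {"app_version": "3.1.0", "middleware": "installed", "port": 8080}

env = {"middleware": "installed", "port": 9090}  # a drifted environment
first = converge(env, desired)                   # applies only the deltas
second = converge(env, desired)                  # re-run: nothing to do
print(first, second)
```

<p>Real configuration management tools apply the same idea to packages, files and services instead of dictionary keys.</p>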
<p><strong><u>Evolve Your Deployment System Incrementally </u></strong></p>
<p>Everyone can see the appeal of a fully automated deployment process: “Release your software at the push of a button.” You don’t have to have completed all of the steps to get value from your work. The first time you write a script to deploy the application in a local development environment and share it with the team, you have saved lots of work of individual developers. Start by getting the operations team to work with developers to automate deployment of the application into a testing environment. Make sure that the operations people are comfortable with the tools being used to deploy. Ensure that developers can use the same process to deploy and run the application in their development environments. Then, move on to refining these scripts so they can be used in the acceptance test environment to deploy and run the application so that the tests can be run. Then, move further down the deployment pipeline and ensure the operations team can use the same tools to deploy the application into staging and production.</p>
<p>The post <a href="https://effectivepmc.net/blog/deployment-pipeline/">Deployment Pipeline</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps Tool &#8211; Cucumber</title>
		<link>https://effectivepmc.net/blog/devops-tool-cucumber/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Mon, 19 Mar 2018 17:31:23 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[DevOps Tools]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1600</guid>

					<description><![CDATA[<p>DevOps Tool &#8211; Cucumber DevOps Tool &#8211; Cucumber is a tool based on the Behavior Driven Development (BDD) framework, used to write acceptance tests for web applications. It allows automation of functional validation in a format (like plain English) that is easily readable and understandable by Business Analysts, Developers, Testers, etc. Cucumber feature files can serve as a good [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/devops-tool-cucumber/">DevOps Tool &#8211; Cucumber</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>DevOps Tool &#8211; Cucumber</h1>
<p><a href="https://effectivepmc.net/wp-content/uploads/2017/08/cucumber.png"><img decoding="async" class="alignnone size-full wp-image-1446" src="https://effectivepmc.net/wp-content/uploads/2017/08/cucumber.png" alt="DevOps Tool Cucumber" width="380" height="133" /></a></p>
<p>DevOps Tool &#8211; Cucumber is a tool based on the Behavior Driven Development (BDD) framework, used to write acceptance tests for web applications. It allows automation of functional validation in a format (like plain English) that is easily readable and understandable by Business Analysts, Developers, Testers, etc.</p>
<p>Cucumber feature files can serve as a good living document for everyone. There are other tools, like JBehave, which also support the BDD framework. Initially Cucumber was implemented in Ruby and was then extended to a Java framework. Both tools support JUnit natively.</p>
<p>Behavior Driven Development is an extension of Test Driven Development, and it is used to test the system as a whole rather than a particular piece of code. We will discuss more about BDD and the style of writing BDD tests.</p>
<p>Cucumber can be used along with Selenium, Watir, Capybara, etc. Cucumber supports many other languages like Perl, PHP, Python, .Net, etc. In this tutorial we will concentrate on Cucumber with Java as the language.</p>
<p>Basic components of Cucumber are:</p>
<p><strong>#1) Feature Files:</strong></p>
<p>Feature files are an essential part of Cucumber and are used to write test automation steps or acceptance tests. They can be used as a living document. The steps are the application specification. All feature files end with the .feature extension.</p>
<p>Sample feature file:</p>
<pre>
Feature: Login Functionality Feature
  In order to ensure Login Functionality works,
  I want to run the cucumber test to verify it is working

  Scenario: Login Functionality
    Given user navigates to website.com
    When user logs in using Username as "USER" and Password "PASSWORD"
    Then login should be successful

  Scenario: Login Functionality
    Given user navigates to website.com
    When user logs in using Username as "USER1" and Password "PASSWORD1"
    Then error message should be thrown
</pre>
<p><strong>#2) Feature:</strong></p>
<p>This gives information about the high-level business functionality (refer to the previous example) and the purpose of the Application under test. Everybody should be able to understand the intent of the feature file by reading the first Feature step. This part is basically kept brief.</p>
<p><strong>#3) Scenario:</strong></p>
<p>Basically a scenario represents a particular functionality which is under test. By reading the scenario, the user should be able to understand the intent behind it and what the test is all about. Each scenario should follow the Given, When, Then format. This language is called “Gherkin”.</p>
<p>Given: As mentioned above, given specifies the pre-conditions. It is basically a known state.</p>
<p>When: This is used when some action is to be performed. As in above example we have seen when the user tries to log in using username and password, it becomes an action.</p>
<p>Then: The expected outcome or result should be placed here. For Instance: verify the login is successful, successful page navigation.</p>
<p>Background: Whenever some steps need to be performed before every scenario, those steps should be placed in Background. For instance: if the user needs to clear the database before each scenario, those steps can be put in Background.</p>
<p>And: And is used to combine two or more steps of the same type.</p>
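<p>This tutorial targets Cucumber with Java, where annotated step definitions bind Gherkin lines to code. Purely to illustrate that binding mechanism, here is a toy matcher in Python (real projects would use Cucumber-JVM or a BDD library; the step pattern below mirrors the sample feature file):</p>

```python
import re

STEPS = []

def step(pattern):
    """Register a step definition, analogous to @Given/@When/@Then."""
    def decorator(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return decorator

@step(r'user logs in using Username as "(\w+)" and Password "(\w+)"')
def log_in(context, user, password):
    context["logged_in"] = (user == "USER" and password == "PASSWORD")

def run_step(context, text):
    """Find the step definition matching a Gherkin line and invoke it."""
    for pattern, fn in STEPS:
        match = pattern.search(text)
        if match:
            fn(context, *match.groups())
            return True
    return False

ctx = {}
run_step(ctx, 'When user logs in using Username as "USER" and Password "PASSWORD"')
print(ctx["logged_in"])  # True
```

<p>The capture groups in the pattern become step arguments, which is why the same definition serves both the successful and the failing login scenarios.</p>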
<p>The post <a href="https://effectivepmc.net/blog/devops-tool-cucumber/">DevOps Tool &#8211; Cucumber</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps Toolchain</title>
		<link>https://effectivepmc.net/blog/devops-toolchain/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Sun, 18 Mar 2018 18:14:11 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1628</guid>

					<description><![CDATA[<p>DevOps Toolchain DevOps is all about how businesses must be results-oriented to meet evolving customer demand and gain greater market share. Today’s marketplace is connected, always on and increasingly competitive. Companies are adopting what we call an “as-a-service” approach to achieve better outcomes fast, consuming and leveraging leading-edge technologies such as cloud and automation. As [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/devops-toolchain/">DevOps Toolchain</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>DevOps Toolchain</h1>
<p><a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> is all about how businesses must be results-oriented to meet evolving customer demand and gain greater market share. Today’s marketplace is connected, always on and increasingly competitive. Companies are adopting what we call an “as-a-service” approach to achieve better outcomes fast, consuming and leveraging leading-edge technologies such as cloud and automation.</p>
<p>As IT systems grow exponentially, and cloud solutions proliferate, non-automated, manual systems increasingly are becoming a major business liability. Today’s systems are simply becoming too big and complex to run completely manually, and working without automation is largely unsustainable for many enterprises across all industries.</p>
<p>Automation involves a set of tools, processes and insights that allows IT environments to self-modify and adjust, and some enterprises have started using intelligent automation to drive a new, more productive relationship between people and machines.</p>
<p>For example, IT automation is often used to auto-scale and load-balance large fleets of servers, manage global content distribution based on geographic demand, enable self-healing of IT systems and manage security mostly with limited ongoing manual intervention.</p>
<p>Moreover, automation enables the ability to adapt and improve the service experience without manual intervention. However, while these tools offer new strengths and capabilities, they are meant to complement and enhance human skills.</p>
<p>Effective automation depends on adequate insights collected from all the systems relevant to the service experience and business outcome you’re trying to augment. Insight from data is necessary: it opens paths to automated predictions and, ultimately, to using machine learning or artificial intelligence as part of the full scope of the as-a-service construct.</p>
<p>Specific insights, known as telemetry, allow signals to be harvested and interpreted so that automation can better adjust production systems to maintain a healthy business. The insight gathered from such analytics allows automation to validate and compose modification rules. For example, sensors that detect a supply chain issue could automatically reroute or fine-tune related functions, such as dispatch or logistics, to solve or generate a workaround for the issue. The business flow can adapt and realign automatically with the ultimate goal of improving the customer experience.</p>
<p><strong><u>Automation Creates High Resiliency</u></strong></p>
<p>Two common business outcomes that depend on efficient automation are highly resilient systems and experimentation platforms.</p>
<p>Highly resilient systems include automation that can detect, avoid, heal and remediate any deviations from normal, healthy business function. To detect deviations, automation capabilities need to understand what the “steady state” of the system is and what constitutes the “health” of the system under varying conditions. For each detected deviation from an established steady state, a specific automation is triggered that attempts to return the system back to the steady state.</p>
<p>The best way to determine if resiliency automation works effectively is through a process known as “fault injection.” Highly resilient systems run under constant fire drills in which operations insert faults into the system while developers continuously build responding resiliency automation.</p>
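<p>The detect-and-remediate behavior described above can be sketched as a control loop that compares observed metrics against a defined steady state. The metric names and remediations below are invented for illustration:</p>

```python
def reconcile(observed, steady_state, remediations):
    """Trigger a remediation for every metric deviating from steady state.

    `remediations` maps a metric name to a callable returning the
    corrected value; deviations with no remediation are reported for
    human follow-up.
    """
    unresolved = []
    for metric, expected in steady_state.items():
        if observed.get(metric) != expected:
            fix = remediations.get(metric)
            if fix:
                observed[metric] = fix()
            else:
                unresolved.append(metric)
    return unresolved

steady = {"healthy_replicas": 3, "queue_backlog": 0}
observed = {"healthy_replicas": 2, "queue_backlog": 0}  # one replica lost
unresolved = reconcile(observed, steady, {"healthy_replicas": lambda: 3})
print(observed, unresolved)
```

<p>Fault injection, in this picture, is deliberately perturbing <code>observed</code> in production-like conditions to verify that a remediation exists and actually restores the steady state.</p>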
<p><strong><u>Automation Creates Higher Degree of Experimentation</u></strong></p>
<p>Automation also can provide a higher degree of experimentation and increase agility, two key attributes of as-a-service economy. Automatically provisioning a component such as a virtual machine, for example, is only a piece of the puzzle since automation is most valuable when it contributes to improving a customer experience or delivering a business outcome.</p>
<p>A platform that’s constantly testing, experimenting and developing allows companies to try new ideas in production quickly without fear of failure or outage. When confidence in system resiliency is high, it allows businesses to test new things directly in production (A/B testing). If an experiment fails, there is no harm done as automation returns the system to steady state. If an experiment succeeds, it is quickly absorbed into the production itself.</p>
<p>A fast, efficient experimentation platform enables businesses to react faster to failures and successes—and pivot accordingly without excess wasted resources. For example, a retail company might change a shopping basket feature for 1 percent of its customers. With constant measurement and instrumentation, the company can automatically derive insights, determine if the change is effective and create a chain of automated reactions. If, say, the demand spikes for a new offering based on a limited customer pilot, the system can reset stocking levels ahead of geographic or further customer segment rollout. This ability increases a company’s agility and adaptability, improving the customer experience and delivering on the most important factors determining success in today’s as-a-service business environment.</p>
<h2>Use Of Tools to Facilitate Devops</h2>
<p>Tools are inherent to our jobs, inherent to how we solve the problems we face each day. Our comfort level with the set of tools available to us, and our ability to adapt to new tools as they evolve, shape our thoughts and ideas. The availability of collective knowledge within the palm of your hand, combined with collaboration across organization and company boundaries through open source software, is dramatically disrupting the status quo of work. Companies mired in managing infrastructure configuration by hand, with unknown numbers of divergent systems, unable to quickly change and respond to market demands, will struggle against counterparts who have managed to contain their complexity through infrastructure automation. While it is possible to manage servers by hand, or even with artisanally crafted shell scripts, a proper configuration management tool is invaluable, especially as your environment and team change.</p>
<p>Even the best software developers will struggle if they are working in an environment without a version control system in place. Tools matter in that not having them, or using them incorrectly, can destroy the effectiveness of even the most intelligent and empathetic of engineers. The consideration you give to the tools you use will be reflected in your organization’s overall success. You’ll find that what is a good tool for some teams might not be a good one for others. The strength of tools comes from how well they fit the needs of the people or groups using them. If you don’t need feature X, its presence won’t be a selling point when considering which tool your organization should use. Especially in larger organizations, with teams numbering in the dozens, finding one tool that meets the needs of every team will be increasingly difficult. You will have to strike a balance between deciding on one tool that will be used across the entire company consistently and allowing more freedom of choice among individual teams. There are benefits both to the consistency and manageability that come from having only one tool in use in an organization, and to allowing teams to pick the specific tools that work best for them.</p>
<p>Because DevOps is a cultural shift built on collaboration between development, operations, and testing, there is no single &#8220;DevOps tool&#8221;: there is rather a set (a &#8220;DevOps toolchain&#8221;) consisting of multiple tools across the delivery and deployment pipelines. Generally, DevOps tools fit into one or more of these categories, which reflect the software development and delivery process:</p>
<ul>
<li>Plan — Plan the testing strategy, the CI/CD strategy, the choice of tools, etc.;</li>
<li>Code — Code development and review, version control tools, code merging;</li>
<li>Build — Continuous integration tools, build status;</li>
<li>Test — Continuous testing tools that give quick and timely feedback on quality;</li>
<li>Release — Change management, release approvals, release automation;</li>
<li>Deploy — Infrastructure configuration and management, Infrastructure-as-Code tools;</li>
<li>Operate and Monitor — Application performance monitoring, end-user experience.</li>
</ul>
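<p>As a hedged illustration of the categories above, the flow from build to deploy can be sketched as a tiny shell pipeline; every command here is a placeholder for whatever tool a team actually chooses, not a specific product's API.</p>

```shell
#!/bin/sh
# Minimal sketch of a toolchain pipeline; each stage is a stub standing in
# for the team's real tool (build tool, test runner, deployment tool, ...).
set -e  # abort the pipeline as soon as any stage fails

stage() {
    name=$1; shift
    echo "=== $name ==="
    "$@"            # run the stage's command; set -e stops us on failure
}

stage "Build"   true   # e.g. compile and package the application
stage "Test"    true   # e.g. run the automated test suite
stage "Release" true   # e.g. tag and publish the build artifact
stage "Deploy"  true   # e.g. push the artifact to an environment
echo "pipeline finished"
```

<p>In a real toolchain, each <code>true</code> would be replaced by the corresponding tool invocation, and a CI server would run this script on every commit.</p>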
<p>Though there are many tools available, certain categories of them are essential in the DevOps toolchain setup for use in an organization.</p>
<p>Tools such as Docker (containerization), Jenkins (continuous integration), Puppet (Infrastructure-as-Code) and Vagrant (virtualization platform)—among many others—are often used and frequently referenced in DevOps tooling discussions.</p>
<p>The typical stages in a DevOps toolchain look like this:</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/DevOpsToolchain.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1630" src="https://effectivepmc.net/wp-content/uploads/2018/03/DevOpsToolchain.png" alt="DevOps Toolchain" width="628" height="472" /></a></p>

<p><strong>Please refer to the following links to know more about DevOps Tools</strong></p>



<ul class="wp-block-list">
<li><a href="https://effectivepmc.net/blog/jenkins-tool/">DevOps Tool – Jenkins</a></li>
<li><a href="https://effectivepmc.net/blog/devops-tool-cucumber/">DevOps Tool – Cucumber</a></li>
</ul>
<p>The post <a href="https://effectivepmc.net/blog/devops-toolchain/">DevOps Toolchain</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Configuration Management</title>
		<link>https://effectivepmc.net/blog/configuration-management/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Sun, 18 Feb 2018 05:42:00 +0000</pubDate>
				<category><![CDATA[Agile Testing]]></category>
		<category><![CDATA[CI/CD]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1688</guid>

					<description><![CDATA[<p>Configuration Management Agile projects often involve heavy use of automated tools to develop, test, and manage software development. Developers use tools for static analysis, unit testing, and code coverage. Developers continuously check the code and unit tests into a configuration management system, using automated build and test frameworks. These frameworks allow the continuous integration of [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/configuration-management/">Configuration Management</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Configuration Management</h1>
<p>Agile projects often involve heavy use of automated tools to develop, test, and manage software development. Developers use tools for static analysis, unit testing, and code coverage. Developers continuously check the code and unit tests into a configuration management system, using automated build and test frameworks. These frameworks allow the continuous integration of new software with the system, with the static analysis and unit tests run repeatedly as new software is checked in.</p>
<p>Tool usage in Agile projects must supplement interaction between people, not replace it. In <a href="https://effectivepmc.net/blog/what-is-agile/">Agile</a> software development projects, many tools are used in various situations. A few examples: developers use tools like SONAR for static analysis and code coverage, and JUnit for unit testing. Developers also use configuration management tools like SVN, into which they check in source code and unit tests. An automated build then compiles the code and runs the tests through the test frameworks.</p>
<p>Examples of few tools that are commonly used with build automation are:</p>
<ul>
<li>Make</li>
<li>Ant</li>
<li>Maven</li>
<li>Gradle</li>
<li>Jenkins</li>
<li>JUnit</li>
<li>Subversion</li>
<li>GIT</li>
</ul>
<p>These common toolsets and build scripts allow for customization when needed, giving their users the flexibility to configure the tools to their individual needs. They support centralized control and reporting with decentralized execution (identity-based security), and they can be easily deployed on existing infrastructure where possible.</p>
<p>These automated tests can also include functional tests at the integration and system levels. Such functional automated tests may be created using functional testing harnesses, open-source user interface functional test tools, or commercial tools, and can be integrated with the automated tests run as part of the continuous integration framework. In some cases, due to the duration of the functional tests, the functional tests are separated from the unit tests and run less frequently. For example, unit tests may be run each time new software is checked in, while the longer functional tests are run only every few days.</p>
<p>One goal of the automated tests is to confirm that the build is functioning and installable. If any automated test fails, the team should fix the underlying defect in time for the next code check-in. This requires an investment in real-time test reporting to provide good visibility into test results. This approach helps reduce expensive and inefficient cycles of “build-install-fail-rebuild-reinstall” that can occur in many traditional projects, since changes that break the build or cause software to fail to install are detected quickly.</p>
<p>Automated tests may also aim to verify that the build is functional and stable. Any build failure warrants immediate action from the developers and testers, which helps ensure that other developers are not delayed by a broken build.</p>
<p>A typical setup distinguishes three kinds of builds: the fast build, the full build, and the push-to-QA build.</p>
<ul>
<li>The fast build is triggered whenever a source code change is detected; it catches compilation, unit testing, and packaging errors. It also performs static code analysis and provides a web-based report on code coverage. This build can be customized to the team's needs.</li>
<li>The full build is usually scheduled twice a day; it performs the same steps as the fast build and additionally deploys to the DEV server.</li>
<li>The push-to-QA build runs on demand and deploys the latest DEV build to QA.</li>
</ul>
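<p>A hedged sketch of how the three build kinds relate to one another; the echo statements are placeholders for real tool invocations, not a specific CI server's configuration:</p>

```shell
#!/bin/sh
# Sketch only: the fast build is a subset of the full build, and the
# push-to-QA build merely promotes an existing artifact.

fast_build() {            # triggered on every detected source change
    echo "compile sources"
    echo "run unit tests"
    echo "run static analysis and coverage report"
}

full_build() {            # scheduled, e.g. twice a day
    fast_build            # same steps as the fast build...
    echo "deploy build to DEV server"   # ...plus deployment to DEV
}

push_to_qa_build() {      # run on demand
    echo "promote latest DEV build to QA"
}

full_build
```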
<p>Automated testing and build tools help manage the regression risk associated with the frequent change that occurs in Agile projects. However, over-reliance on automated unit testing alone to manage these risks can be a problem, as unit testing often has limited defect detection effectiveness; automated tests at the integration and system levels are also required.</p>
<p>The post <a href="https://effectivepmc.net/blog/configuration-management/">Configuration Management</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps Tool &#8211; Jenkins</title>
		<link>https://effectivepmc.net/blog/jenkins-tool/</link>
		
		<dc:creator><![CDATA[Archana Shinde]]></dc:creator>
		<pubDate>Sat, 10 Feb 2018 17:43:11 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[DevOps Tools]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1602</guid>

					<description><![CDATA[<p>DevOps Tool &#8211; Jenkins Jenkins Tool is a software that allows continuous integration. Jenkins tool will be installed on a server where the central build will take place. The following flowchart demonstrates a very simple workflow of how Jenkins works. Along with Jenkins tool, sometimes, one might also see the association of Hudson. Hudson is a very [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/jenkins-tool/">DevOps Tool &#8211; Jenkins</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>DevOps Tool &#8211; Jenkins</h1>
<p><a href="https://effectivepmc.net/wp-content/uploads/2017/08/jenkins.png"><img loading="lazy" decoding="async" class="alignnone wp-image-1457 size-full" src="https://effectivepmc.net/wp-content/uploads/2017/08/jenkins.png" alt="Jenkins Tool" width="396" height="127" /></a></p>
<p>Jenkins is a tool that enables <b>continuous integration</b>. Jenkins is installed on a server where the central build takes place. The following flowchart demonstrates a very simple workflow of how Jenkins works.</p>
<p style="text-align: center;"><a href="https://effectivepmc.net/wp-content/uploads/2018/03/why_jenkins.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1603" src="https://effectivepmc.net/wp-content/uploads/2018/03/why_jenkins.png" alt="Why Jenkins" width="170" height="336" /></a></p>
<p>Along with the Jenkins tool, one might also sometimes see mention of <b>Hudson</b>. Hudson is a popular open-source, Java-based continuous integration tool originally developed at Sun Microsystems. After Oracle acquired Sun, a fork was created from the Hudson source code, and that fork became Jenkins.</p>
<h2>What is Continuous Integration?</h2>
<p>Continuous Integration is a development practice that requires developers to integrate code into a shared repository at regular intervals. The practice is meant to catch integration issues early in the build lifecycle rather than discovering them late. Continuous integration requires developers to build frequently; the common practice is that whenever a code commit occurs, a build is triggered.</p>
<h2>Install Jenkins Tool</h2>
<p>Download just the WAR file and copy it to your Tomcat webapps folder; it will be installed automatically. Jenkins will then be available to you at localhost:8080/jenkins.</p>
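<p>On Linux or macOS, the copy step can be sketched as below; to stay self-contained, the example simulates the Tomcat folder with a temporary directory, so the paths are stand-ins for your real installation.</p>

```shell
#!/bin/sh
# Sketch of dropping jenkins.war into Tomcat's webapps folder.
# TOMCAT_HOME and the WAR location are simulated here; substitute your own.
set -e

TOMCAT_HOME=$(mktemp -d)              # stand-in for your real Tomcat folder
mkdir -p "$TOMCAT_HOME/webapps"

DOWNLOADS=$(mktemp -d)                # stand-in for your downloads folder
: > "$DOWNLOADS/jenkins.war"          # stand-in for the downloaded WAR file

cp "$DOWNLOADS/jenkins.war" "$TOMCAT_HOME/webapps/"
ls "$TOMCAT_HOME/webapps"             # Tomcat auto-deploys WARs found here
```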
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/unlock-jenkins.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1604" src="https://effectivepmc.net/wp-content/uploads/2018/03/unlock-jenkins.png" alt="unlock jenkins" width="606" height="380" /></a></p>
<h3>Install the recommended plugins</h3>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/install-suggested-pluggins-jenkins.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1605" src="https://effectivepmc.net/wp-content/uploads/2018/03/install-suggested-pluggins-jenkins.png" alt="Install suggested pluggin - jenkins" width="606" height="380" /></a></p>
<h3>Create First Admin User in Jenkins Tool</h3>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Create-Admin-User.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1606" src="https://effectivepmc.net/wp-content/uploads/2018/03/Create-Admin-User.png" alt="Create First Admin User on Jenkins" width="606" height="380" /></a></p>
<p>Now configure the Jenkins tool using the Manage Jenkins menu.</p>
<p>Once Jenkins is up and running, one can access Jenkins from the link − <b>http://localhost:8080</b></p>
<p>This link will bring up the Jenkins dashboard.</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Configure-Jenkins.png"><img loading="lazy" decoding="async" class="alignnone wp-image-1609" src="https://effectivepmc.net/wp-content/uploads/2018/03/Configure-Jenkins.png" alt="Configure Jenkins" width="537" height="337" /></a></p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/dependency-errors-1.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1611" src="https://effectivepmc.net/wp-content/uploads/2018/03/dependency-errors-1.png" alt="Jenkins dependency errors" width="606" height="380" /></a></p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Configure-Jenkins-2.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1612" src="https://effectivepmc.net/wp-content/uploads/2018/03/Configure-Jenkins-2.png" alt="Configure Jenkins" width="606" height="380" /></a></p>
<h3>Check Installed Plugins</h3>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/check-installed-pluggins.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1614" src="https://effectivepmc.net/wp-content/uploads/2018/03/check-installed-pluggins.png" alt="Installed Pluggins" width="606" height="380" /></a></p>
<h2>Jenkins and Tomcat Setup</h2>
<p>Copy the jenkins.war file that was downloaded in the previous section into the webapps folder inside the Tomcat folder.</p>
<p>Now open the command prompt. From the command prompt, browse to the directory where the tomcat7 folder is located, then browse to the bin directory in this folder and run the startup.bat file.</p>
<pre class="result notranslate">E:\Apps\tomcat7\bin&gt;startup.bat
</pre>
<p>Once the processing completes without major errors, the following line will appear in the command prompt output.</p>
<pre class="result notranslate">INFO: Server startup in 1302 ms
</pre>
<p>Open the browser and go to the link − <b>http://localhost:8080/jenkins</b>. Jenkins will be up and running on tomcat.</p>
<p>The post <a href="https://effectivepmc.net/blog/jenkins-tool/">DevOps Tool &#8211; Jenkins</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Continuous Integration</title>
		<link>https://effectivepmc.net/blog/continuous-integration/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Sat, 27 Jan 2018 18:05:12 +0000</pubDate>
				<category><![CDATA[Agile Scrum]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1622</guid>

					<description><![CDATA[<p>Continuous Integration In Continuous integrations, all the developer work is merged to main branch several times a day. This helps in reducing integration problems. In Continuous integration we try to minimize the duration for which a branch remains checked out. A developer code gets early feedback on the new code added to main repository by using [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/continuous-integration/">Continuous Integration</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Continuous Integration</h1>
<p>In continuous integration, all developer work is merged to the main branch several times a day. This helps reduce integration problems.</p>
<p>In continuous integration we try to minimize the duration for which a branch remains checked out, and developers get early feedback on new code added to the main repository.</p>
<h2>Implementing Continuous Integration</h2>
<p>The practice of Continuous Integration (CI) relies on certain prerequisites being in place.</p>
<h3>Version Control</h3>
<p>Everything in your project must be checked in to a single version control repository, which may include:</p>
<ul>
<li>Code</li>
<li>Tests</li>
<li>Database Scripts</li>
<li>Builds</li>
<li>Deployment scripts</li>
<li>Anything required to create, install, run and test your application</li>
</ul>
<h3>An Automated Build</h3>
<p>You must be able to start your build from the command line. You can start with a command-line program that tells your IDE to build your software and then runs your tests, or it can be a complex collection of multistage build scripts that call one another. Whatever the mechanism, it must be possible for either a person or a computer to run your build, test, and deployment process in an automated fashion via the command line.</p>
<p>IDEs and CI tools have become quite sophisticated, and you can usually build your software and run tests without going anywhere near the command line. However, you should still have build scripts that can be run from the command line without your IDE, because you will need to run your build process in an automated way from your CI environment.</p>
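<p>One way to satisfy this, sketched under the assumption of a simple project, is a single entry-point script that developers and the CI server both invoke; the script name and the steps below are illustrative placeholders for real build-tool commands.</p>

```shell
#!/bin/sh
# build.sh (hypothetical name): one command-line entry point for the whole
# build, so the IDE, the developer, and the CI server all run the same steps.
set -e

compile()   { echo "compiling sources"; }    # e.g. a compiler or build tool
run_tests() { echo "running tests"; }        # e.g. the unit test runner
package()   { echo "packaging artifact"; }   # e.g. creating the deployable

build_all() { compile && run_tests && package; }

build_all
```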
<h3>Agreement of the Team</h3>
<p>CI is a practice, not a tool. It requires a degree of commitment and discipline from your development team. You need everyone to check in small incremental changes frequently to the mainline and to agree that the highest-priority task of the project is to fix any change that breaks the application. If people don’t adopt the discipline necessary for it to work, your attempts at continuous integration will not lead to the improvement in quality that you hoped for.</p>
<h2>Best Practices of Continuous Integration</h2>
<p>Some of the best practices of Continuous Integration are:</p>
<ul>
<li><strong>Build Automation</strong>: In CI, we create a build environment in which a build can be triggered with a single command. This automation extends all the way to deployment to the Production environment.</li>
<li><strong>Main Code Repository</strong>: In CI, we maintain a main branch in the code repository that stores all the Production-ready code. This is the branch we can deploy to Production at any time.</li>
<li><strong>Self-testing Build</strong>: Every build in CI should be self-testing: with every build, a set of tests runs to ensure that the changes are of high quality.</li>
<li><strong>Everyday commit to baseline</strong>: Developers commit all of their changes to the baseline every day. This ensures that no big pile-up of code waits a long time for integration with the main repository.</li>
<li><strong>Build every commit to baseline</strong>: With automated continuous integration, every commit to the baseline triggers a build. This helps confirm that every change integrates correctly.</li>
<li><strong>Fast Build Process</strong>: One of the requirements of CI is to keep the build process fast so that we can quickly identify any problem.</li>
<li><strong>Don’t Check In on a Broken Build</strong>: The cardinal sin of CI is checking in on a broken build. If the build breaks, the developer who broke it must fix it as soon as possible.</li>
<li><strong>Always run all commit tests locally before committing</strong>: Running the commit tests locally is a sanity check before committing, and a way to ensure that what we believe works actually does.</li>
<li><strong>Production-like environment testing</strong>: In CI, we maintain a production-like environment, also known as pre-production or staging, which closely mirrors the Production environment. We perform testing in this environment to check for integration issues.</li>
<li><strong>Publish Build Results</strong>: We publish build results on a common site so that everyone can see them and take corrective action.</li>
<li><strong>Deployment Automation</strong>: The deployment process is automated to the extent that the build process can include a step that deploys the code to a test environment, where all the stakeholders can access and test the latest delivery.</li>
</ul>
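<p>The practice of running all commit tests locally before committing can be automated with a Git pre-commit hook. This is a hedged sketch: the actual test command is a placeholder, and the hook path is the standard Git convention.</p>

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: refuse the commit if the local
# commit tests fail. Replace run_commit_tests with your real test command.

run_commit_tests() {
    # e.g. invoke the project's fast test suite here; 'true' is a placeholder
    true
}

if run_commit_tests; then
    echo "commit tests passed"
else
    echo "commit tests failed - aborting commit" >&2
    exit 1
fi
```

<p>Git runs this script automatically before each commit; a non-zero exit status aborts the commit.</p>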
<h2>Benefits of Continuous Integration</h2>
<ul>
<li>CI makes the current build constantly available for testing, demo, and release purposes.</li>
<li>With CI, developers write modular code that works well with frequent code check-ins.</li>
<li>In case of a unit test failure or bug, the developer can easily revert to the last bug-free state of the code.</li>
<li>There is a drastic reduction in release-day chaos with CI practices.</li>
<li>With CI, we can detect integration issues much earlier in the process.</li>
<li>Automated testing is a very useful side effect of implementing CI.</li>
<li>All the stakeholders, including business partners, can see the small changes deployed to the pre-production environment. This provides early feedback on changes to the software.</li>
<li>Automated CI and testing generate metrics that help improve the development process.</li>
</ul>
<h2>Continuous Integration and Distributed Teams</h2>
<p>Using continuous integration with distributed teams is, in terms of process and technology, largely the same as in any other environment. However, the fact that the team is not sitting together in the same room, and perhaps is even working in different time zones, does have an impact in some areas.</p>
<ul>
<li>For distributed teams within the same time zone, continuous integration works much the same.</li>
<li>For distributed teams in different time zones, there are more issues to deal with. If the team in the USA breaks the build and goes home, this can be a serious handicap for the team in India. The process does not change, but the importance of adhering to it is magnified.</li>
<li>Some of the more powerful continuous integration servers have facilities such as centrally managed build farms and sophisticated authorization schemes that allow you to provide continuous integration as a centralized service to large and distributed teams.</li>
<li>It is also worth considering a move to a distributed version control system such as Git, which allows people to check in even when there is no link to the conventionally designated &#8220;master&#8221; server. In Git, contributions are made by forking the repository of the project you wish to contribute to, making your changes, and then asking the owners of the original repository to pull your changes once they work correctly.</li>
<li>Having a local version control system can also be an option in the worst of circumstances; however, it should be avoided as much as possible.</li>
</ul>
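<p>The Git fork-and-pull flow mentioned above can be sketched entirely with local repositories, so it runs offline: the bare "upstream" repository below is a stand-in for the project you forked, and all names and identities are illustrative assumptions.</p>

```shell
#!/bin/sh
# Offline sketch of a distributed Git workflow: a bare repository plays
# the role of the upstream project, and a clone plays the contributor's fork.
set -e
workdir=$(mktemp -d)
cd "$workdir"

git init -q --bare upstream.git           # the project you contribute to
git clone -q upstream.git fork            # your fork / local working copy
cd fork
git config user.email "dev@example.com"   # example identity (assumption)
git config user.name  "Example Dev"

echo "change" > feature.txt               # make a small, incremental change
git add feature.txt
git commit -qm "Add feature"
git push -q origin HEAD                   # publish for the owners to pull
```

<p>In a real fork-and-pull workflow, the final step would be a pull request asking the upstream owners to merge the pushed branch.</p>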
<p><a href="https://effectivepmc.net/blog/devops-tool-jenkins/">For more information on installation of Jenkins click here</a></p>
<p>The post <a href="https://effectivepmc.net/blog/continuous-integration/">Continuous Integration</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
