<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>devops Archives - World Of Agile</title>
	<atom:link href="https://effectivepmc.net/blog/tag/devops/feed/" rel="self" type="application/rss+xml" />
	<link></link>
	<description></description>
	<lastBuildDate>Tue, 22 Apr 2025 04:16:26 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://effectivepmc.net/wp-content/uploads/2020/06/cropped-woa_logo-1-150x150.png</url>
	<title>devops Archives - World Of Agile</title>
	<link></link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>ITSM &#8211; ITIL : How Similar and How Different?</title>
		<link>https://effectivepmc.net/blog/itsm-itil-how-similar-and-how-different/</link>
		
		<dc:creator><![CDATA[Archana Shinde]]></dc:creator>
		<pubDate>Fri, 29 Oct 2021 19:37:39 +0000</pubDate>
				<category><![CDATA[ITIL]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[Itil]]></category>
		<category><![CDATA[ITSM]]></category>
		<category><![CDATA[Six Sigma]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=9249</guid>

					<description><![CDATA[<p>ITSM &#8211; ITIL : How Similar and How Different? ITSM ITIL ITSM stands for IT service management.  ITIL meaning IT infrastructure library It is the process involved in managing IT services provided to customers (like planning, delivering, operating and controlling ) It is a structure consisting of the best practices of ITSM. ITSM covers frameworks like [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/itsm-itil-how-similar-and-how-different/">ITSM &#8211; ITIL : How Similar and How Different?</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-table">
<h1>ITSM &#8211; ITIL : How Similar and How Different?</h1>
<table>
<tbody>
<tr>
<td><strong>ITSM</strong></td>
<td><strong>ITIL</strong></td>
</tr>
<tr>
<td>ITSM stands for IT service management. </td>
<td><a href="https://effectivepmc.net/what-is-itil/">ITIL</a> stands for IT Infrastructure Library.</td>
</tr>
<tr>
<td>It is the process of managing IT services provided to customers (planning, delivering, operating and controlling them).</td>
<td>It is a framework consisting of the best practices of ITSM.</td>
</tr>
<tr>
<td>ITSM covers frameworks like <a href="https://effectivepmc.net/blog/what-is-six-sigma/">Six Sigma</a>, COBIT, <a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a>, ITIL</td>
<td>To implement ITSM, we use ITIL</td>
</tr>
<tr>
<td>ITSM is focused on planning and managing changes to keep the business profitable</td>
<td>ITIL is more customer-focused, concentrating on providing services to customers.</td>
</tr>
<tr>
<td>ITSM operates at the overall organizational implementation level</td>
<td>ITIL is a collection of processes that guide the delivery and support of IT services at the project or delivery level</td>
</tr>
<tr>
<td>ITSM describes the &#8220;what&#8221;</td>
<td>ITIL describes the &#8220;how.&#8221;</td>
</tr>
</tbody>
</table>
</figure>
<p>The post <a href="https://effectivepmc.net/blog/itsm-itil-how-similar-and-how-different/">ITSM &#8211; ITIL : How Similar and How Different?</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Connection Between Lean, Agile, DevOps, Six-Sigma, ITSM, Scrum</title>
		<link>https://effectivepmc.net/blog/connection-between-lean-agile-devops-six-sigma-itsm-scrum/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Thu, 14 Oct 2021 07:23:20 +0000</pubDate>
				<category><![CDATA[Agile Scrum]]></category>
		<category><![CDATA[Agile]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[ITSM]]></category>
		<category><![CDATA[Lean]]></category>
		<category><![CDATA[Scrum]]></category>
		<category><![CDATA[Six-Sigma]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=9238</guid>

					<description><![CDATA[<p>Connection Between Lean, Agile, DevOps, Six-Sigma, ITSM, Scrum Let’s understand the connection between various philosophies, methods and Scrum framework. First a definition of what the terms Philosophy, Framework, Methodology and Standards mean Philosophy: Broad values and principles which people follow. Philosophy is more of a mindset than telling you actually how to do things. Framework: [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/connection-between-lean-agile-devops-six-sigma-itsm-scrum/">Connection Between Lean, Agile, DevOps, Six-Sigma, ITSM, Scrum</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1>Connection Between Lean, Agile, DevOps, Six-Sigma, ITSM, Scrum</h1>
<p>Let’s understand the connection between various philosophies, methods and Scrum framework.</p>



<p>First, a definition of what the terms Philosophy, Framework, Methodology and Standards mean:</p>



<ul class="wp-block-list">
<li><strong>Philosophy</strong>: Broad values and principles which people follow. Philosophy is more of a mindset than telling you actually how to do things.</li>
<li><strong>Framework</strong>: A framework is a bare-minimum set of essential guidelines under which you operate and solve specific types of problems. E.g. <a href="https://effectivepmc.net/blog/scrum/">Scrum</a> and <a href="https://effectivepmc.net/blog/what-is-kanban/">Kanban</a> are frameworks which give broad guidelines.</li>
<li><strong>Methodology:</strong> A methodology gives guidelines about tools and techniques, and sometimes includes a framework to operate within. Describing everything is not always a great approach: most times, methodologies are over-defined and much of the detail becomes irrelevant in use. Examples of methodologies are <a href="https://effectivepmc.net/blog/what-is-prince2/">PRINCE2</a> by <a href="https://www.axelos.com/">Axelos</a> and Extreme Programming (XP).</li>
<li><strong>Standards: </strong>A standard collects best practices, tools, techniques and processes which are generally used in the industry. It is basically a collection of various things put together. For example, the PMBOK by PMI is a standard for project management.</li>
</ul>



<p><strong>Lean</strong></p>



<p><a href="https://effectivepmc.net/blog/what-is-lean/">Lean</a> is an effective way of eliminating waste and bringing focus to customer value. Lean is the over-arching philosophy, consisting of the five broad principles described below:</p>



<figure class="wp-block-table">
<table>
<tbody>
<tr>
<td><strong>Lean Principles</strong></td>
<td><strong>Description</strong></td>
</tr>
<tr>
<td>Value</td>
<td>Define what is of value to the customer</td>
</tr>
<tr>
<td>Value stream</td>
<td>Identify the value stream/eliminate waste.</td>
</tr>
<tr>
<td>Flow</td>
<td>Create a constant flow</td>
</tr>
<tr>
<td>Pull</td>
<td>Produce based on demand</td>
</tr>
<tr>
<td>Perfection</td>
<td>Continuous improvement</td>
</tr>
</tbody>
</table>
</figure>



<p>Within the umbrella of Lean, various other philosophies are prevalent in the industry – DevOps and Agile being the most widely used.</p>



<p><strong>Agile</strong></p>



<p><a href="https://effectivepmc.net/blog/what-is-agile/">Agile</a> is a philosophy consisting of thought processes which are summed up in the Agile Manifesto. The Agile Manifesto consists of 4 values and 12 principles. The 4 values are:</p>



<ul class="wp-block-list">
<li>Individuals and Interactions <strong>over</strong> Processes and Tools</li>
<li>Working Products <strong>over</strong> Comprehensive Documentation</li>
<li>Customer Collaboration <strong>over</strong> Contract Negotiation</li>
<li>Responding to Change <strong>over</strong> Following a Plan</li>
</ul>



<p>Some of the key thought processes on which the Agile principles are based are:</p>



<ul class="wp-block-list">
<li>Focus on Value</li>
<li>Elimination of wastage and improvement of productivity</li>
<li>Respecting Change</li>
<li>Iterative and Incremental Delivery</li>
<li>Collaborating teams</li>
<li>Working with variable scope</li>
<li>Focus on working product</li>
<li>Time boxed delivery</li>
<li>Focus on continuous improvements</li>
</ul>



<p>Most Agile values and principles comply directly with the broader Lean philosophy.</p>



<p><strong>Scrum</strong></p>



<p><a href="https://effectivepmc.net/blog/scrum/">Scrum</a> is a framework for handling complexity. Scrum complies with all of the Agile principles and values in some way or the other. The framework consists of 3 Artifacts, 5 Events and 3 Roles, which are the minimum needed to handle complex work effectively.</p>



<p><strong>DevOps</strong></p>



<p><a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> is a philosophy. DevOps isn’t a framework or methodology in and of itself; it doesn’t stand alone. DevOps adopts and leverages multiple frameworks and methodologies such as Agile, Lean and ITSM. DevOps has benefited tremendously from the work the Agile community has done, showing how small, high-trust teams working in small batches, with smaller and more frequent software releases, can dramatically increase the productivity of development organizations.</p>



<p><strong>ITSM</strong></p>



<p>IT service management (ITSM) comprises the activities performed by an organization to design, plan, deliver, operate and control information technology (IT) services offered to customers – commonly called support/maintenance/production-support services. Various Six Sigma tools, XP techniques, software development techniques and <a href="https://effectivepmc.net/blog/devops-toolchain/">DevOps Tools</a> are used.</p>



<p><strong>Six Sigma</strong></p>



<p><a href="https://effectivepmc.net/blog/what-is-six-sigma/">Six Sigma</a> is a set of statistical quality-improvement techniques which can be used to achieve agility or become Lean. Without quality, the product may not deliver value to the customer. Hence Six Sigma techniques are frequently used while executing work in frameworks such as Scrum, Kanban and XP.</p>
<p>The post <a href="https://effectivepmc.net/blog/connection-between-lean-agile-devops-six-sigma-itsm-scrum/">Connection Between Lean, Agile, DevOps, Six-Sigma, ITSM, Scrum</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps End of Service / Product Life</title>
		<link>https://effectivepmc.net/blog/devops-end-of-service-product-life/</link>
		
		<dc:creator><![CDATA[Archana Shinde]]></dc:creator>
		<pubDate>Tue, 15 Dec 2020 14:44:01 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Continuous Delivery]]></category>
		<category><![CDATA[Continuous Integration]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[DevOps Lifecycle]]></category>
		<category><![CDATA[Product Owner]]></category>
		<category><![CDATA[Scrum Master]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=8540</guid>

					<description><![CDATA[<p>DevOps End of Service / Product Life Agile Teams use DevOps to implement Continuous Integration and Continuous Delivery which helps them to launch their products at a faster rate into the market. The main reason DevOps is adopted are Predictability, Maintainability, Time to market, Quality Improvement and Resilience. The DevOps Lifecycle covers many important life [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/devops-end-of-service-product-life/">DevOps End of Service / Product Life</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1>DevOps End of Service / Product Life</h1>
<p>Agile Teams use DevOps to implement <a href="https://effectivepmc.net/blog/continuous-integration/">Continuous Integration</a> and <a href="https://effectivepmc.net/blog/continuous-delivery/">Continuous Delivery</a>, which help them launch their products into the market faster.</p>



<p>The main reasons <a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> is adopted are predictability, maintainability, time to market, quality improvement and resilience.</p>



<p>The DevOps Lifecycle covers many important life cycle phases like Development, Testing, Integration, Deployment &amp; Monitoring. </p>



<p>But it often misses out on the end of product or service life. Here we explore this phase in more detail.</p>



<p>The end of a service or product life is a decision often taken by a business representative – either a <a href="https://effectivepmc.net/blog/scrum-master/">Scrum Master</a>, a <a href="https://effectivepmc.net/blog/product-owner/">Product Owner</a> or a Sponsor. There could be various reasons for deciding the end of a service or product life, which may include:</p>



<ul class="wp-block-list">
<li>Product being obsolete</li>
<li>Compliance or regulatory issues</li>
<li>Unable to sustain the cost of maintenance</li>
<li>Alternate service or product available</li>
<li>Competitive disadvantage and subsequent loss of business with no chance of revival</li>
<li>Technology being obsolete</li>
</ul>



<p>The end of a product or service life is treated as a story like any other:</p>



<ul class="wp-block-list">
<li>The story will say why, e.g. no more business need, or replacement by a cheaper/simpler/more effective service</li>
<li>The story will say what the conditions are, e.g.
<ul>
<li>What happens to any data / documentation / tools / other components?</li>
<li>How to ensure any replacement service is ready before closing down the old one?</li>
<li>When to close down, and in which sequence?</li>
</ul>
</li>
<li>Backing up the data related to the end of service life</li>
<li>Retention of data</li>
</ul>



<p>Other things which may need to be done are:</p>



<ul class="wp-block-list">
<li>Communication to the relevant stakeholders</li>
<li>Documentation to re-initiate the service – just in case there is a decision to re-initiate</li>
<li>Retrospective on why the service has to be terminated, if the termination is due to performance reasons</li>
<li>Releasing the resources back to Operations (e.g. servers, memory, databases) so that no more cost is accrued for the service</li>
<li>Releasing human resources back to HR</li>
<li>Creating an ROI report, if relevant</li>
</ul>



<p>The end of product or service life should never be ignored: the longer the decision is delayed, the more cost accrues for the product or service.</p>
<p>The post <a href="https://effectivepmc.net/blog/devops-end-of-service-product-life/">DevOps End of Service / Product Life</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Configuration Management in DevOps</title>
		<link>https://effectivepmc.net/blog/configuration-management-in-devops/</link>
		
		<dc:creator><![CDATA[Archana Shinde]]></dc:creator>
		<pubDate>Tue, 10 Nov 2020 14:37:51 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Configuration Management in DevOps]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[DevOps Certification]]></category>
		<category><![CDATA[DevOps Training]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=8467</guid>

					<description><![CDATA[<p>Configuration Management in DevOps Configuration management is all about trying to ensure that the files and software you are expecting to be on a machine are present, configured correctly, and working as intended. When you have only a single machine this is fairly simple. When you have five or ten servers, it is still possible [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/configuration-management-in-devops/">Configuration Management in DevOps</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1>Configuration Management in DevOps</h1>
<p>Configuration management is all about trying to ensure that the files and software you are expecting to be on a machine are present, configured correctly, and working as intended.</p>



<p>When you have only a single machine this is fairly simple. When you have five or ten servers, it is still possible to do this manually, but it may take all day. However, when your infrastructure scales up into the thousands, you need a better way of doing things.</p>



<h2 class="wp-block-heading">Version Control</h2>



<p>What is &#8220;version control&#8221;, and why should you care? Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. For the examples here we will use software source code as the files being version-controlled, though in reality you can do this with nearly any type of file on a computer.</p>



<p>If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use. It allows you to revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. Using a VCS also generally means that if you screw things up or lose files, you can easily recover. In addition, you get all this for very little overhead.</p>



<h3 class="wp-block-heading">Best Practices of Version Control</h3>



<ul class="wp-block-list">
<li><strong>Keep absolutely everything in version control : </strong>Developers should use version control for source code (of course), but also for tests, database scripts, build and deployment scripts, documentation, libraries and configuration files for their applications.</li>
<li><strong>Check In Regularly to Trunk : </strong>Once the changes are checked in into version control, they are available to the entire team.</li>
<li><strong>Using Meaningful Commit Messages : </strong>Always use detailed, multi-paragraph commit messages when you check in; this can save hours of debugging later if an error occurs. In a multi-paragraph commit message, the first paragraph is a high-level summary and the full details follow in the remaining paragraphs.</li>
</ul>



<h3 class="wp-block-heading">Managing Components and Dependencies</h3>



<ul class="wp-block-list">
<li><strong>Managing External Libraries : </strong>External libraries usually come in binary form, unless you are using an interpreted language. There are two reasonable ways of managing them:
<ul>
<li>Check them into version control. This is the simplest solution and works fine for small projects. However, for larger projects and larger libraries it may make the version control system too heavy, and the approach may become unviable.</li>
<li>Declare them and use a tool like Maven or Ivy to download the libraries from Internet repositories into your own artifact repository.</li>
</ul>
</li>
<li><strong>Managing Components : </strong>It is good practice to split your application into smaller components. Doing so limits the scope of changes to your application, reducing regression bugs. It also encourages reuse and enables a much more efficient development process on large projects.</li>
</ul>



<h3 class="wp-block-heading">Managing Software Configuration</h3>



<p>Configuration is one of the three key parts that comprise an application, along with its binaries and its data. Configuration information can be used to change the behaviour of software at build time, deploy time and run time. Delivery teams need to consider carefully which configuration options should be available, how to manage them throughout the application's life, and how to ensure that configuration is managed consistently across components, applications and technologies. You should treat the configuration of the system the same way you treat your code: subject it to proper management and testing. There are three questions to consider when managing your application configuration:</p>



<ul class="wp-block-list">
<li>How do you represent your configuration information?</li>
<li>How do your deployment scripts access it?</li>
<li>How does it vary between environments, applications and versions of applications?</li>
</ul>



<p>Each configuration setting can be modeled as a tuple (a data structure consisting of multiple parts, typically an ordered set of values). Generally, the set of tuples available and their values depend on three things:</p>



<ul class="wp-block-list">
<li>The Application</li>
<li>The version of the application</li>
<li>The environment it runs on</li>
</ul>
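<p>As a concrete illustration of the tuple model above, here is a minimal Python sketch in which each setting is keyed by (application, version, environment); the application names, versions and values are hypothetical:</p>

```python
# Minimal sketch: each configuration setting is modeled as a tuple of
# (application, version, environment) mapped to its values.
# All names and values below are hypothetical illustrations.

CONFIG = {
    ("billing", "1.2", "production"): {"db_host": "prod-db", "log_level": "WARN"},
    ("billing", "1.2", "staging"):    {"db_host": "stage-db", "log_level": "DEBUG"},
}

def lookup(application, version, environment, key):
    """Resolve a setting for a given application, version and environment."""
    return CONFIG[(application, version, environment)][key]

print(lookup("billing", "1.2", "staging", "log_level"))  # DEBUG
```

<p>The same lookup then drives build-, deploy- or run-time behaviour without scattering environment checks through the code.</p>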



<h3 class="wp-block-heading">Principles of managing Software Configurations</h3>



<p>Some of the <strong>principles</strong> of managing configuration are as follows</p>



<ul class="wp-block-list">
<li>Consider where in your application lifecycle it makes sense to inject a particular piece of configuration.</li>
<li>Keep the available configuration options for your application in the same repository as its source code.</li>
<li>Values of the configuration should be managed separately</li>
<li>Configurations should always be performed by automated process using values taken from your configuration repository.</li>
<li>Use clear naming conventions and avoid obscure names.</li>
<li>Do not repeat the information.</li>
<li>Be minimalist. Keep the configuration information as simple as possible.</li>
<li>Avoid over-engineering the configuration system.</li>
<li>Ensure you have tests for your configurations.</li>
</ul>
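<p>To make these principles concrete, here is a small Python sketch of keeping the available options (with defaults) alongside the source code while an automated process injects environment-specific values at deploy time; the option names and environment variables are illustrative assumptions, not a real tool's API:</p>

```python
import os

# The options and their defaults are checked in with the source code;
# environment-specific values are injected by the automated deployment
# process, modeled here as environment variables. Names are illustrative.

DEFAULTS = {"timeout_seconds": "30", "feature_x_enabled": "false"}

def resolve(options=DEFAULTS, env=os.environ):
    """Overlay deploy-time values on top of the checked-in defaults."""
    return {k: env.get("APP_" + k.upper(), default) for k, default in options.items()}

cfg = resolve(env={"APP_TIMEOUT_SECONDS": "5"})
print(cfg)  # {'timeout_seconds': '5', 'feature_x_enabled': 'false'}
```

<p>Because the overlay is automated and minimal, the same code works unchanged across environments, and the configuration itself can be tested.</p>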
<p>The post <a href="https://effectivepmc.net/blog/configuration-management-in-devops/">Configuration Management in DevOps</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Amazon Elastic Block Store (EBS)</title>
		<link>https://effectivepmc.net/blog/amazon-elastic-block-store-ebs/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Mon, 07 May 2018 19:04:26 +0000</pubDate>
				<category><![CDATA[Amazon Web Services (AWS)]]></category>
		<category><![CDATA[aws]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=2126</guid>

					<description><![CDATA[<p>Amazon Elastic Block Store (EBS) Amazon Elastic Block Store (EBS) provides persistent block storage volumes for use with Amazon EC2 instances. As written in the EC2 post, the instance stores are temporary storage. The Instance stores may get deleted if the instance terminates or of there is a disk failure. Each EBS volume is automatically [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/amazon-elastic-block-store-ebs/">Amazon Elastic Block Store (EBS)</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Amazon Elastic Block Store (EBS)</h1>
<p>Amazon Elastic Block Store (EBS) provides persistent block storage volumes for use with Amazon EC2 instances. As described in the EC2 post, instance stores are temporary storage: they may be lost if the instance terminates or if there is a disk failure. Each EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. EBS volumes are available in a variety of types that differ in performance characteristics and price.</p>
<h2>Types of Amazon Elastic Block Store (EBS) volumes</h2>
<h3>Magnetic Volumes</h3>
<p>Magnetic volumes have the lowest performance, but they are also the cheapest. They are ideal for workloads where data is accessed infrequently.</p>
<h3>General Purpose SSD</h3>
<p>General Purpose SSD volumes provide cost-effective storage that is ideal for a broad range of workloads, delivering strong performance at a moderate price point. General Purpose SSD volumes can provide baseline performance of up to 3,000 IOPS.</p>
<h3>Provisioned IOPS SSD</h3>
<p>These are designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency. They can provide as much as 20,000 IOPS.</p>
<h2>Backup / Recovery</h2>
<p>You can back up the data on your Amazon EBS volumes, regardless of volume type, by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed since your most recent snapshot are stored.</p>
<p>The post <a href="https://effectivepmc.net/blog/amazon-elastic-block-store-ebs/">Amazon Elastic Block Store (EBS)</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Amazon Elastic Compute Cloud (EC2)</title>
		<link>https://effectivepmc.net/blog/amazon-elastic-compute-cloud-ec2/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Sat, 28 Apr 2018 17:18:13 +0000</pubDate>
				<category><![CDATA[Amazon Web Services (AWS)]]></category>
		<category><![CDATA[aws]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=2105</guid>

					<description><![CDATA[<p>Amazon Elastic Compute Cloud (EC2) Amazon Elastic Compute Cloud (EC2) is a primary web service provided by AWS and it provides re-sizable compute capacity in the cloud. Compute refers to the computational power required to fulfill your work. Amazon EC2 allows you to acquired compute power through launching of virtual servers called as instances. When [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/amazon-elastic-compute-cloud-ec2/">Amazon Elastic Compute Cloud (EC2)</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Amazon Elastic Compute Cloud (EC2)</h1>
<p>Amazon Elastic Compute Cloud (EC2) is a primary web service provided by AWS, and it provides re-sizable compute capacity in the cloud. Compute refers to the computational power required to fulfill your work. Amazon EC2 allows you to acquire compute power by launching virtual servers called instances. When you launch an instance, you use as much compute power as you need, and AWS charges you based on the compute power you have chosen. There are two considerations to make when launching instances on AWS:</p>
<ol>
<li>The amount of virtual hardware dedicated to the instance</li>
<li>The software loaded on the instance</li>
</ol>
<h2>Amazon Elastic Compute Cloud (EC2) Instance types</h2>
<p>There are many instance types on AWS, based on varying dimensions of:</p>
<ol>
<li>Virtual CPUs</li>
<li>Memory</li>
<li>Storage</li>
<li>Network Performance</li>
</ol>
<h3>Instance Families</h3>
<p>Instances are grouped into families based on the ratio of the values of the dimensions mentioned above. Within each family, there are several choices that scale up linearly in size. Once you choose an instance family, you cannot change it without a significant migration exercise.</p>
<p>An example of an instance family in AWS is shown below.</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/05/AWS-Instance.png"><img decoding="async" class="alignnone size-full wp-image-2108" src="https://effectivepmc.net/wp-content/uploads/2018/05/AWS-Instance.png" alt="" width="430" height="449" /></a></p>
<p>For more information on <a href="https://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener noreferrer">instance types of AWS, click on the link.</a></p>
<h2>Instance Pricing Options</h2>
<h3>On Demand EC2 Instances</h3>
<p>This is the most flexible pricing option and requires no upfront commitment. The customer has full control over when the instance is launched and when it is terminated.</p>
<h3>Reserved Instances</h3>
<p>This option enables the customer to make capacity reservations for predictable workloads. By using Reserved Instances for these workloads, the customer can save significantly compared to On-Demand instances. Two factors determine the cost of a reservation:</p>
<ol>
<li>The term of commitment &#8211; the longer the commitment, the bigger the discount.</li>
<li>The payment option &#8211; All Upfront, Partial Upfront or No Upfront; All Upfront gives the maximum discount.</li>
</ol>
<h3>Spot Instances</h3>
<p>This option is for workloads that are not time-critical, and Spot Instances offer the greatest discount. With Spot Instances, customers specify the price they are willing to pay. When the customer&#8217;s bid price is above the current Spot price, the customer receives the requested instance.</p>
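<p>The Spot allocation rule can be sketched as a tiny Python function; the prices below are made up, and this is only an illustration of the rule, not the AWS API:</p>

```python
# Toy sketch of the Spot rule: a request runs while the customer's bid
# is at or above the current Spot price, and is interrupted when the
# Spot price rises past the bid. Prices are hypothetical.

def spot_state(bid, spot_price):
    return "running" if bid >= spot_price else "interrupted"

print(spot_state(bid=0.12, spot_price=0.09))  # running
print(spot_state(bid=0.12, spot_price=0.15))  # interrupted
```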
<h2>Amazon Elastic Compute Cloud (EC2) Enhanced Networking</h2>
<p>For workloads requiring greater network performance, many instance types support Enhanced Networking. Enhanced Networking reduces the impact of virtualization on network performance by enabling capabilities that result in:</p>
<ol>
<li>Improved packets per second (PPS)</li>
<li>Lower Latency</li>
<li>Less Jitter</li>
</ol>
<h2>Instance Store</h2>
<p>The instance storage is a temporary storage. The data in the instance store is lost when</p>
<ul>
<li>Underlying disk fails</li>
<li>The instance stops</li>
<li>The instance terminates</li>
</ul>
<p>Use Amazon S3 or Elastic Block Store (EBS) for persistent data.</p>
<p>The post <a href="https://effectivepmc.net/blog/amazon-elastic-compute-cloud-ec2/">Amazon Elastic Compute Cloud (EC2)</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deployment Pipeline</title>
		<link>https://effectivepmc.net/blog/deployment-pipeline/</link>
		
		<dc:creator><![CDATA[Snehamayee]]></dc:creator>
		<pubDate>Sat, 31 Mar 2018 18:30:13 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1638</guid>

					<description><![CDATA[<p>Deployment Pipeline A Deployment pipeline is an important concept in Continuous Delivery. In Deployment pipeline we break the build process into distinct stages In each stage we get feedback to move into the next stage. It is a collaborative effort between various groups involved in delivering Agile Software Development. Often the first state in Deployment pipeline is [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/deployment-pipeline/">Deployment Pipeline</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Deployment Pipeline</h1>
<p>A Deployment pipeline is an important concept in <a href="https://effectivepmc.net/blog/continuous-delivery/">Continuous Delivery</a>. In a Deployment pipeline we break the build process into distinct stages. In each stage we get feedback before moving on to the next stage.</p>
<p>It is a collaborative effort between various groups involved in delivering <a href="https://effectivepmc.net/blog/agile/">Agile Software Development</a>.</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Deployment-Pipeline.png"><img decoding="async" class="alignnone size-full wp-image-1641" src="https://effectivepmc.net/wp-content/uploads/2018/03/Deployment-Pipeline.png" alt="Deployment Pipeline" width="626" height="479" /></a></p>
<p>Often the first stage in a Deployment pipeline is compiling the code and converting it into binaries. After that, we run the automated tests. Depending on the scenario, there are stages like performance testing, security checks, usability testing etc. in a Deployment pipeline.</p>
<p>In <a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> our aim is to automate all the stages of the Deployment pipeline. With a smooth-running Deployment Pipeline, we can achieve the goal of Continuous Delivery.</p>
<p>At an abstract level, a deployment pipeline is an automated manifestation of your process for getting software from version control into the hands of your users. Every change to your software goes through a complex process on its way to being released. That process involves building the software, followed by the progress of these builds through multiple stages of testing and deployment.</p>
<p><strong>Steps in the Deployment Pipeline include</strong></p>
<ul>
<li>The deployment pipeline begins when a developer commits code to a shared version control system.</li>
<li>Prior to doing this commit, the developer will have performed a series of pre-commit tests on their local environment. The failure of the pre-commit tests of course means that the commit does not take place.</li>
<li>A commit then triggers an integration build of the service being developed. This build is tested by the integration tests.</li>
<li>If these tests are successful, the build is promoted to a quasi-production environment – the staging environment.</li>
<li>Then it is promoted to production under close supervision.</li>
<li>After another period of close supervision, it is promoted to normal production.</li>
<li>The specific tasks may vary a bit for different organizations. For example, a small company may not have a staging environment or special supervision for a recently deployed version. A larger company may have several different production environments for different purposes.</li>
<li>One way to define continuous integration is to have automatic triggers between one phase and the next, up to integration tests. That is, if the build is successful then integration tests are triggered. If not, the developer responsible for the failure is notified.</li>
<li>Continuous delivery is defined as having automated triggers as far as the staging system.</li>
<li>Once a service is deployed into production it is closely monitored for a period and then it is promoted into normal production.</li>
<li>At this final stage, monitoring and testing still exist but the service is no different from other services in this regard.</li>
<li>The committed code moves through the different steps described above but the code does not move on its own. Rather, it is moved by tools. The tools are controlled by their programs (scripts) or by developer/operator commands.</li>
<li>Traceability is of utmost importance in this movement. Traceability means that, for any system in production, it is possible to determine exactly how it came to be in production. This means keeping track not only of source code but also of all commands to all the tools that acted on the elements of the system. Individual commands are difficult to trace; therefore, it is better to use scripts instead of ad hoc commands.</li>
<li>Treating infrastructure-as-code means that the infrastructure scripts and movement scripts should be subject to the same quality control as application source code and regulated in the same fashion.</li>
</ul>
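<p>The trigger rules in the steps above &#8211; automatic promotion from one stage to the next up to staging, with promotion to production handled under supervision &#8211; can be sketched as follows. The stage names and the pass/fail dictionary are hypothetical stand-ins for real build and test tooling.</p>

```python
# Sketch of automated stage triggers: each stage runs only after the
# previous one succeeds; the first failure stops the pipeline (and, in a
# real system, notifies the developer responsible).

AUTOMATED_STAGES = ["commit build", "integration tests", "acceptance tests", "staging"]

def run_pipeline(stage_results: dict) -> list:
    """stage_results maps stage name -> bool (did the tooling pass?).

    Returns the list of stages that ran and passed, in order.
    """
    passed = []
    for stage in AUTOMATED_STAGES:
        if not stage_results.get(stage, False):
            break  # stop at the first failure; later stages never run
        passed.append(stage)
    return passed
```

<p>Promotion to production is deliberately absent from the automated list: in this model it remains a supervised, manual decision.</p>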
<h2>A Basic Deployment Pipeline</h2>
<p>The above figure shows a typical deployment pipeline and captures the essence of the approach.</p>
<ul>
<li>The process starts with the developers committing changes into their version control system. At this point, the continuous integration management system responds to the commit by triggering a new instance of our pipeline. The first (commit) stage of the pipeline</li>
</ul>
<ul>
<li>compiles the code,</li>
<li>runs unit tests,</li>
<li>performs code analysis, and</li>
<li>creates installers</li>
</ul>
<ul>
<li>The second stage is typically composed of longer-running automated acceptance tests. Again, your CI server should let you split these tests into suites which can be executed in parallel to increase their speed and give you feedback faster – typically within an hour or two. This stage is triggered automatically by the successful completion of the first stage in your pipeline.</li>
<li>At this point, the pipeline branches to enable independent deployment of your build to various environments – in this case, UAT, capacity testing and production.</li>
</ul>
<h3>Commit Stage</h3>
<p>A new instance of your deployment pipeline is created upon every check-in and, if the first stage passes, results in the creation of a release candidate. The aim of the first stage in the pipeline is to eliminate builds that are unfit for production and signal the team that the application is broken as quickly as possible. We want to expend a minimum of time and effort on a version of the application that is obviously broken. So, when a developer commits a change to the version control system, we want to evaluate the latest version of the application quickly. The developer who checked in then waits for the results before moving on to the next task. There are a few things we want to do as part of the commit stage</p>
<ul>
<li>Compile the code (if necessary)</li>
<li>Run a set of commit tests</li>
<li>Create binaries for use by the later stages</li>
<li>Perform analysis of the code to check its health</li>
<li>Prepare artifacts such as test databases for use by the later stages.</li>
</ul>
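<p>A minimal sketch of such a commit stage, running the tasks listed above in order and failing fast so that a broken build is signalled as quickly as possible. The task callables are hypothetical placeholders for a real compiler, test runner, static analyzer and packager.</p>

```python
# Fail-fast commit stage: run each named task in order; the first failure
# aborts the stage so the team gets feedback as quickly as possible.

def commit_stage(tasks) -> bool:
    """tasks is a list of (name, callable) pairs; each callable returns bool."""
    for name, task in tasks:
        if not task():
            print(f"commit stage failed at: {name}")
            return False
    return True

# Placeholder tasks standing in for real tooling.
tasks = [
    ("compile", lambda: True),
    ("commit tests", lambda: True),
    ("code analysis", lambda: True),
    ("create binaries", lambda: True),
    ("prepare test data", lambda: True),
]
```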
<p>If you only implement a commit stage in your development process, it usually represents an enormous step forward in the reliability and quality of the outputs of your teams. However, there are several more stages necessary to complete what we consider to be a minimal deployment pipeline.</p>
<h3>Automated Acceptance Test Gate</h3>
<p>A comprehensive commit test suite is an excellent litmus test for many classes of errors, but there is much that it won’t catch. Unit tests, which comprise the vast majority of the commit tests, are so coupled to the low-level API that it is often hard for developers to avoid the trap of asserting that the solution works in a particular way, rather than asserting that it solves a particular problem.</p>
<p>Commit tests that run against every check-in provide us with timely feedback on problems with the latest build and on bugs in our application. But without running acceptance tests in a production-like environment, we know nothing about whether the application meets the customer’s specifications, nor whether it can be deployed and survive in the real world. If we want timely feedback on these topics, we must extend the range of our continuous integration process to test and rehearse these aspects of our system too.</p>
<p>The goal of the acceptance test stage is to assert that the system delivers the value the customer is expecting and that it meets the acceptance criteria. The acceptance test stage also serves as a regression test suite, verifying that no bugs are introduced into existing behavior by the new changes.</p>
<p>The automated acceptance test gate is the second significant milestone in the lifecycle of a release candidate.</p>
<h3>Subsequent Test Stages</h3>
<p>The acceptance test stage is a significant milestone in the lifecycle of a release candidate. Once this stage has been completed, a successful release candidate has moved on from something that is largely the domain of the development team to something of wider interest and use.</p>
<p>For the simplest deployment pipelines, a build that has passed acceptance testing is ready for release to users, at least as far as the automated testing of the system is concerned. If the candidate fails this stage, it is by definition not fit to be released.</p>
<p>The progression of the release candidate to this point has been automatic with successful candidates being automatically promoted to the next stage. If you are delivering software incrementally it is possible to have an automated deployment to production. But for many systems, some form of manual testing is desirable before release, even when you have a comprehensive set of automated tests. Many projects have environments for testing integration with other systems, environments for testing capacity, exploratory testing environments and staging/production-like environments. Each of these environments can be more or less production-like and have their own unique configuration.</p>
<h3>Preparing to Release</h3>
<p>There is a business risk associated with every release of a production system. At best, if there is a serious problem at the point of release, it may delay the introduction of valuable new capabilities. At worst, if there is no sensible back-out plan in place, it may leave the business without mission-critical resources because they had to be decommissioned as part of the release of the new system. The mitigation of these problems is very simple when we view the release step as a natural outcome of our deployment pipeline. Fundamentally we want to</p>
<ul>
<li>Have a release plan that is created and maintained by everybody involved in delivering the software, including developers and testers as well as operations, infrastructure and support personnel.</li>
<li>Minimize the effect of people making mistakes by automating as much of the process as possible, starting with the most error-prone stages</li>
<li>Rehearse the procedure often in production-like environments, so that you can debug the process and the technology supporting it</li>
<li>Have the ability to back out a release if things don’t go according to plan</li>
<li>Have a strategy for migrating configuration and production data as part of the upgrade and rollback processes.</li>
</ul>
<h2>Implementing a Deployment Pipeline</h2>
<p>Whether you are starting a new project from scratch or trying to create an automated pipeline for an existing system, you should generally take an incremental approach to implementing a deployment pipeline. In general, the steps look like this</p>
<ul>
<li>Model your value stream and create a walking skeleton</li>
<li>Automate the build and deployment process</li>
<li>Automate unit tests and code analysis</li>
<li>Automate acceptance tests</li>
<li>Automate releases</li>
</ul>
<h2>Build and Deployment Scripting</h2>
<p>One of the core principles of environment management is that changes to testing and production environments should only be made through an automated process. That means you should not log into such systems remotely to perform deployments by hand; deployments should always be entirely scripted.</p>
<p>Some of the principles of Build and Deployment Pipeline Scripting are as follows</p>
<p><strong><u>Create a Script for Each Stage in your Deployment Pipeline</u></strong></p>
<p>Taking this approach ensures that our scripts have a well-defined structure that helps us to keep them clean during maintenance and minimizes dependencies between components of our build and deployment system. Luckily, the deployment pipeline provides an excellent organizing principle for dividing up responsibilities between build scripts. When you first start your project, it makes sense to have a single script containing every operation that will be performed in the course of executing the deployment pipeline, with dummy targets for steps that are not yet automated. However, once your script gets sufficiently long, you can divide it up into separate scripts for each stage in your pipeline. Thus you will have a commit script containing all the targets required to compile your application, package it, run the commit test suite, and perform static analysis of the code. You then need a functional acceptance test script that calls your deployment tool to deploy the application to the appropriate environment, then prepare any data, and finally runs the acceptance tests.</p>
<p><strong><u>Use an Appropriate Technology to Deploy Your Application </u></strong></p>
<p>In a typical deployment pipeline, most stages that follow a successful commit stage, such as the automated acceptance test stage and user acceptance test stage, depend upon the application being deployed to a production-like environment. It is vital that this deployment is automated too. However, you should use the right tool for the job when automating deployment, not a general-purpose scripting language (unless the deployment process is extremely simple).</p>
<p><strong><u>Use the Same Scripts to Deploy to Every Environment </u></strong></p>
<p>It is essential to use the same process to deploy to every environment in which your application runs to ensure that the build and deployment process is tested effectively. That means using the same scripts to deploy to each environment and representing the differences between environments—such as service URIs and IP addresses—as configuration information to be managed separately. Separate out configuration information from the script and store it in version control, providing some mechanism for your deployment script to retrieve it. It is essential that both build and deployment scripts work on developers’ machines as well as on production-like environments, and that they are used to perform all build and deployment activities by developers.</p>
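<p>The principle can be sketched as follows. The environments, hosts and URIs below are invented; the point is that the deploy routine is identical everywhere and only the looked-up configuration differs.</p>

```python
# One deploy routine for every environment; per-environment differences
# (hosts, service URIs) live in configuration, not in the script.
# In practice this configuration would itself be kept in version control.

ENVIRONMENTS = {
    "uat":        {"host": "uat.example.internal",   "service_uri": "http://uat.example.internal/api"},
    "staging":    {"host": "stage.example.internal", "service_uri": "http://stage.example.internal/api"},
    "production": {"host": "prod.example.internal",  "service_uri": "http://prod.example.internal/api"},
}

def deploy(version: str, environment: str) -> str:
    """Deploy the given version using the same logic in every environment."""
    config = ENVIRONMENTS[environment]
    return f"deployed {version} to {config['host']} ({config['service_uri']})"
```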
<p><strong><u>Ensure the Deployment Process Is Idempotent </u></strong></p>
<p>Your deployment process should always leave the target environment in the same (correct) state, regardless of the state it finds it in when starting a deployment. The simplest way to achieve this is to start with a known-good baseline environment, provisioned either automatically or through virtualization. This environment should include all the appropriate middleware and anything else your application requires to work. Your deployment process can then fetch the version of the application you specify and deploy it to this environment, using the appropriate deployment tools for your middleware.</p>
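<p>A minimal illustration of this property, modelling the target environment as a plain dictionary (a stand-in for real middleware and application state): the step converges on the desired state regardless of what it finds, so running it once or ten times gives the same result.</p>

```python
# Idempotent deployment step: clear whatever is there and converge the
# environment on the one desired state, so repeated runs are harmless.

DESIRED_STATE = {"app_version": "2.1.0", "middleware": "installed"}

def deploy_idempotent(environment: dict) -> dict:
    """Bring the environment to DESIRED_STATE; leftovers are removed."""
    environment.clear()
    environment.update(DESIRED_STATE)
    return environment
```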
<p><strong><u>Evolve Your Deployment System Incrementally </u></strong></p>
<p>Everyone can see the appeal of a fully automated deployment process: “Release your software at the push of a button.” You don’t have to have completed all of the steps to get value from your work. The first time you write a script to deploy the application in a local development environment and share it with the team, you save individual developers a great deal of work. Start by getting the operations team to work with developers to automate deployment of the application into a testing environment. Make sure that the operations people are comfortable with the tools being used to deploy. Ensure that developers can use the same process to deploy and run the application in their development environments. Then, move on to refining these scripts so they can be used in the acceptance test environment to deploy and run the application so that the tests can be run. Then, move further down the deployment pipeline and ensure the operations team can use the same tools to deploy the application into staging and production.</p>
<p>The post <a href="https://effectivepmc.net/blog/deployment-pipeline/">Deployment Pipeline</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps Perspective of Infrastructure and Environments</title>
		<link>https://effectivepmc.net/blog/infrastructure-and-environments/</link>
		
		<dc:creator><![CDATA[Archana Shinde]]></dc:creator>
		<pubDate>Sat, 24 Mar 2018 18:47:36 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[infrastructure]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1649</guid>

					<description><![CDATA[<p>DevOps Perspective of Infrastructure and Environments There are three steps to deploying software Creating and Managing the infrastructure in which your application will run (hardware, networking, middleware, external services) Installing correct version of your application into it Configuring the application including any data or state it requires. Creating and managing the infrastructure It is a [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/infrastructure-and-environments/">DevOps Perspective of Infrastructure and Environments</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>DevOps Perspective of Infrastructure and Environments</h1>
<p>There are three steps to deploying software</p>
<ul>
<li>Creating and Managing the infrastructure in which your application will run (hardware, networking, middleware, external services)</li>
<li>Installing correct version of your application into it</li>
<li>Configuring the application including any data or state it requires.</li>
</ul>
<h2>Creating and managing the infrastructure</h2>
<p>It is a known fact that most projects fail due to people problems rather than technical problems. Almost all medium and large companies separate the activities of development and infrastructure management (or Operations) into two groups or silos. It is often the case that these two groups of stakeholders have an uneasy relationship. This is because development teams are incentivized to deliver changes as rapidly as possible, whereas operations teams aim for stability.</p>
<p>Probably the most important thing to keep in mind is that all stakeholders have a common goal: making the release of valuable software a low-risk activity. As we discussed earlier, the best way to do this is to keep releasing small increments of software through the creation of a Continuous Delivery pipeline. This ensures that there is as little change as possible between releases. Given this context, here are some of the most important high-level concerns of operations teams.</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/wall-of-confusion.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1653" src="https://effectivepmc.net/wp-content/uploads/2018/03/wall-of-confusion.png" alt="Wall of confusion" width="628" height="438" /></a></p>
<ul>
<li><strong>Documentation and Auditing</strong> : Operations managers want to ensure that any changes to any environment they control are audited and documented, so that if things go wrong they can find the relevant changes that caused the problem.</li>
<li><strong>Alerts for Abnormal Events</strong> : Operations managers will have systems in place to monitor their infrastructure and the applications running on it, and will want to be alerted when an abnormal condition occurs in any of the systems they manage so that they can minimize any downtime.</li>
<li><strong>Logging</strong> : Your applications should log at WARNING level every time a network connection times out or is found to be unexpectedly closed. You should log at INFO or DEBUG level every time you close a connection. If something goes wrong, the Operations team should have forensic tools available that help them recreate the event in their testing environment so that they can prevent it from happening in the future.</li>
<li><strong>IT Service Continuity Planning</strong> : Operations managers will be involved in the creation, implementation, testing and maintenance of their organization’s IT service continuity plan. You should have tested performing backups, recovery and archiving of your application’s data as part of the business continuity testing, as well as retrieving and deploying any given version of your application, and you should provide the operations team with the process for performing each of these activities as part of your release plan.</li>
<li><strong>Use the Technology the Operations Team Is Familiar With</strong>: The operations team wants changes to be made to their environments using technology that is familiar to them, so that they can own and maintain their environments. The development team and the operations team should sit down at the beginning of every project and decide how deployments of the application will be performed. It may be necessary for either the operations team or the software development team to learn an agreed-upon technology – perhaps a scripting language such as Perl, Ruby or Python. It is extremely important that both teams understand the deployment system, because the same process must be used to deploy changes to every environment.</li>
</ul>
<h3>Modeling and Managing Infrastructure</h3>
<p>There are many different classes of configuration information at play in any environment, all of which should be provisioned and managed in an automated fashion.</p>
<p>Even if you do not have control over the selection of your infrastructure, if you intend to fully automate your build, integration, testing and deployment, you must address the following questions.</p>
<ul>
<li>How will we provision our infrastructure?</li>
<li>How will we deploy and configure the various bits of software that form part of our infrastructure?</li>
<li>How do we manage our infrastructure once it is provisioned and configured?</li>
</ul>
<p>As with every other aspect of your delivery process, you should keep everything you need to create and maintain your infrastructure under version control. At a minimum, that means</p>
<ul>
<li>Operating System install definitions</li>
<li>Configuration for data center automation tools such as Puppet</li>
<li>General infrastructure configurations such as DNS files, SMTP settings etc</li>
<li>Any scripts you use for managing your infrastructure.</li>
</ul>
<p>These files in version control form inputs to the deployment pipeline in the same way the source code does. The job of the deployment pipeline in the case of infrastructural changes is threefold</p>
<ul>
<li>First, it should verify that all applications will work with any infrastructural changes before they get pushed out to the production environment, ensuring that every affected application’s functional and non-functional tests pass against the new version of the infrastructure.</li>
<li>Second, it should be used to push changes out to operations-managed testing and production environments.</li>
<li>Third, the pipeline should perform deployment tests to ensure that new infrastructure configuration has been deployed successfully.</li>
</ul>
<p>Following are some of the things which are necessary in controlling your infrastructure configuration</p>
<ul>
<li><strong>Controlling access to your Infrastructure</strong> :</li>
</ul>
<ul>
<li>Controlling access to prevent anyone from making a change without approval</li>
<li>Defining an automated process for making changes to your infrastructure</li>
<li>Monitoring your infrastructure to detect any issues as soon as they occur</li>
</ul>
<ul>
<li><strong>Making Changes to Infrastructure</strong></li>
</ul>
<ul>
<li>Every change, whether it is updating a firewall rule or deploying a new version of your service, should go through the same change management process</li>
<li>This process should be managed using a single ticketing system that everybody can log into</li>
<li>The exact change that is made should be logged so that it can be easily audited</li>
<li>It should be possible to see a history of changes made to every environment, including deployments</li>
<li>The changes you want to make should have been tested on one of your production-like testing environments</li>
<li>The changes should be made to version control and then applied through your automated process for deploying infrastructural changes</li>
<li>There should be a test to verify that the change has worked</li>
</ul>
<h3>Managing Server Provisioning and Configuration</h3>
<p>Provisioning servers and managing their configuration is often overlooked in small and even medium-sized organizations.</p>
<ul>
<li><strong>Provisioning Servers : </strong>At a high level, provisioning servers – whether for testing or production environments – starts with putting a new box in your data center and wiring it in. There are several ways of creating operating system baselines</li>
</ul>
<ul>
<li>A fully manual process</li>
<li>Automated remote installation</li>
</ul>
<ul>
<li><strong>Virtualization</strong> : The fundamental enabler of the cloud is virtualization over hundreds of thousands of hosts accessible over the Internet. In cloud computing, a virtual machine (VM) is an emulation of a physical machine. A VM image is a file that contains a bootable operating system and some software installed on it. A VM image provides the information required to launch a VM.</li>
</ul>
<p>Three of the unique aspects of the cloud that impact DevOps are</p>
<ul>
<li>The ability to create and switch environments</li>
<li>The ability to create VMs easily</li>
<li>Management of Databases</li>
</ul>
<p>Virtualization has the following benefits</p>
<ul>
<li>Fast response to changing requirements</li>
<li>Consolidation</li>
<li>Standardizing hardware</li>
<li>Easier-to-maintain baselines</li>
</ul>
<ul>
<li><strong>Ongoing Management of Servers</strong> : Once you have the operating system installed, you will need to ensure that its configuration doesn’t change in an uncontrolled manner. This means ensuring, first, that nobody is able to log into the boxes except the operations team and, second, that any changes are performed using an automated system. That includes applying OS service packs and upgrades, installing new software, changing settings or performing deployments.</li>
<li><strong>Highly Parallel Testing with Virtual Environments</strong> : Virtualization provides an excellent way to handle multi-platform testing. Simply create virtual machines with examples of each of the potential environments that your application targets and create VM templates from them. Then run all of the stages in your pipeline on all of them in parallel.</li>
</ul>
<h2>Managing Data</h2>
<p>Data and its management and organization pose a particular set of problems for testing and deployment processes for two reasons.</p>
<ul>
<li>First, there is the sheer volume of information that is generally involved. The bytes allocated to encoding the behavior of our application—its source code and configuration information—are usually vastly outweighed by the volume of data recording its state.</li>
<li>Second is the fact that the lifecycle of application data differs from that of other parts of the system. Application data needs to be preserved—indeed, data usually outlasts the applications that were used to create and access it. Crucially, data needs to be preserved and migrated during new deployments or rollbacks of a system.</li>
</ul>
<p>In most cases, when we deploy new code, we can erase the previous version and wholly replace it with a new copy. In this way we can be certain of our starting position. While that option is possible for data in a few limited cases, for most real-world systems this approach is impossible. Once a system has been released into production, the data associated with it will grow, and it will have significant value in its own right. Indeed, arguably it is the most valuable part of your system.</p>
<p>This presents problems when we need to modify either the structure or the content. As systems grow and evolve, it is inevitable that such modifications will be required, so we must put mechanisms into place that allow changes to be accomplished while minimizing disruption and maximizing the reliability of the application and of the deployment process. The key to this is automating the database migration process.</p>
<p>A number of tools now exist that make automating of data migration relatively straightforward, so that it can be scripted as part of your automated deployment process. These tools also allow you to version your database and migrate it from any version to any other. This has the positive effect of decoupling the development process from the deployment process—you can create a migration for each database change required, even if you don’t deploy every schema change independently. It also means that your database administrators (DBAs) don’t need a big up-front plan—they can work incrementally as the application evolves.</p>
<h3>Database Scripting</h3>
<p>As with any other change to your system, any changes to any databases used as part of your build, deploy, test, and release process should be managed through automated processes. That means that database initialization and all migrations need to be captured as scripts and checked into version control. It should be possible to use these scripts to manage every database used in your delivery process, whether it is to create a new local database for a developer working on the code, to upgrade a systems integration testing (SIT) environment for testers, or to migrate production databases as part of the release process.</p>
<p>Of course, the schema of your database will evolve along with your application. This presents a problem because it is important that the database has the correct schema to work with a particular version of your application. For example, when deploying to staging, it is essential to be able to migrate the staging database to the correct schema to work with the version of the application being deployed. Careful management of your scripts makes this possible.</p>
<p>Finally, your database scripts should also be used as part of your continuous integration process. While unit tests should not, by definition, require a database in order to run, any kind of meaningful acceptance tests running against a database-using application will require the database to be correctly initialized. Thus, part of your acceptance test setup process should be creating a database with the correct schema to work with the latest version of the application and loading it with any test data necessary to run the acceptance tests. A similar procedure can be used for later stages in the deployment pipeline.</p>
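<p>One way to sketch such scripted, versioned migrations is shown below, using Python&#8217;s built-in sqlite3 module. The version table, schema and migration SQL are invented for illustration; real migration tools add locking, per-migration transactions and auditing on top of this idea.</p>

```python
import sqlite3

# Each schema change ships as a (roll-forward SQL, roll-back SQL) pair,
# keyed by the version it produces when rolled forward.
MIGRATIONS = {
    1: ("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)", "DROP TABLE users"),
    2: ("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)", "DROP TABLE orders"),
}

def migrate(conn: sqlite3.Connection, target: int) -> None:
    """Move the database from its current version to the target version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    current = row[0] if row else 0
    if row is None:
        conn.execute("INSERT INTO schema_version VALUES (0)")
    while current < target:               # roll forward, oldest to newest
        current += 1
        conn.execute(MIGRATIONS[current][0])
    while current > target:               # roll back, newest to oldest
        conn.execute(MIGRATIONS[current][1])
        current -= 1
    conn.execute("UPDATE schema_version SET version = ?", (current,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn, 2)   # roll forward through versions 1 and 2
migrate(conn, 1)   # roll back: version 2's change is undone
```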
<p><strong><u>Initializing Databases </u></strong></p>
<p>An extremely important aspect of our approach to delivery is the ability to reproduce an environment, along with the application running in it, in an automated fashion. Without this ability, we can’t be certain that the system will behave in the way we expect. This aspect of database deployment is the simplest to get right and to maintain as your application changes through the development process. Almost every data management system supports the ability to initialize a data store, including schemas and user credentials, from automated scripts. So, creating and maintaining a database initialization script is a simple starting point. Your script should first create the structure of the database, database instances, schemas, and so on, and then populate the tables in the database with any reference data required for your application to start.</p>
<p>The simplest process for deploying a database afresh is as follows:</p>
<ul>
<li>Erase what was there before</li>
<li>Create the database structure, database instances, schemas, etc.</li>
<li>Load the database with data</li>
</ul>
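The three steps above can be sketched with SQLite as a minimal, self-contained example; the table names and reference data are hypothetical, invented purely for illustration:

```python
import sqlite3

def deploy_fresh(conn):
    """Recreate the database from scratch: erase, create structure, load data."""
    # 1. Erase what was there before (hypothetical table names)
    conn.executescript("DROP TABLE IF EXISTS orders; DROP TABLE IF EXISTS countries;")
    # 2. Create the database structure: tables, schemas, and so on
    conn.executescript("""
        CREATE TABLE countries (code TEXT PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             country_code TEXT REFERENCES countries(code));
    """)
    # 3. Load the reference data the application needs in order to start
    conn.executemany("INSERT INTO countries VALUES (?, ?)",
                     [("IN", "India"), ("US", "United States")])
    conn.commit()

conn = sqlite3.connect(":memory:")
deploy_fresh(conn)
```

Because every step is a script, the same procedure works for a developer's local database, a SIT environment, or any other environment in the pipeline.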
<h3>Incremental Change</h3>
<p>Continuous integration demands that we are able to keep the application working after every change made to it. This includes changes to the structure or content of our data. Continuous delivery demands that we must be able to deploy any successful release candidate of our application, including the changes to the database, into production (the same is also true for user-installed software that contains a database). For all but the simplest of systems, that means having to update an operational database while retaining the valuable data that is held in it. Finally, due to the constraint that the data in the database must be retained during a deployment, we need to have a rollback strategy should a deployment go wrong for some reason.</p>
<h4>Versioning Your Database</h4>
<p>The most effective mechanism to migrate data in an automated fashion is to version your database. Simply create a table in your database that contains its version number. Then, every time you make a change to the database, you need to create two scripts: one that takes the database from a version x to version x + 1 (a roll-forward script), and one that takes it from version x + 1 to version x (a roll-back script). You will also need to have a configuration setting for your application specifying the version of the database it is designed to work with (this can be kept as a constant in version control and updated every time a database change is required). At deployment time, you can then use a tool which looks at the version of the database currently deployed and the version of the database required by the version of the application that is being deployed. The tool will then work out which scripts to run to migrate the database from its current version to the required version, and run them on the database in order. For a roll forward, it will apply the correct combination of roll-forward scripts, from oldest to newest; for a roll back, it will apply the relevant roll-back scripts in reverse order.</p>
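A minimal sketch of this versioning scheme, assuming migration scripts held inline in a dictionary (a real tool would read them from version control; the table names and SQL here are hypothetical):

```python
import sqlite3

# Hypothetical migrations: version -> (roll-forward SQL, roll-back SQL)
MIGRATIONS = {
    1: ("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
        "DROP TABLE customers"),
    2: ("ALTER TABLE customers ADD COLUMN email TEXT",
        # A real roll-back script for SQLite < 3.35 would rebuild the table;
        # this is kept symbolic.
        "ALTER TABLE customers DROP COLUMN email"),
}

def current_version(conn):
    """Read the version number stored in the database itself."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT version FROM schema_version").fetchone()
    return row[0] if row else 0

def migrate(conn, target):
    """Work out which scripts to run, and run them in order."""
    v = current_version(conn)
    if target > v:
        for step in range(v + 1, target + 1):
            conn.execute(MIGRATIONS[step][0])   # roll forward, oldest to newest
    else:
        for step in range(v, target, -1):
            conn.execute(MIGRATIONS[step][1])   # roll back, newest to oldest
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version VALUES (?)", (target,))
    conn.commit()

db = sqlite3.connect(":memory:")
migrate(db, 2)   # fresh database: applies versions 1 then 2
```

The `target` value is the version constant kept alongside the application in version control; at deploy time the tool compares it with the version recorded in the database and applies the difference.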
<h4>Managing Orchestrated Changes</h4>
<p>In many organizations, it is common to integrate all applications through a single database. This is not a practice we recommend; it’s better to have applications talk to each other directly and factor out common services where necessary (as, for example, in a service-oriented architecture). However, there are situations in which it either makes sense to integrate via the database, or it is simply too much work to change your application’s architecture. In this case, making a change to a database can have a knock-on effect on other applications that use the database. First of all, it is important to test such changes in an orchestrated environment—in other words, in an environment in which the database is reasonably production-like, and which hosts versions of the other applications that use it. Such an environment is often known as a systems integration testing (SIT) environment, or alternatively staging. In this way, assuming tests are frequently run against the other applications that use the database, you will soon discover if you have affected another application.</p>
<h3>Rollback Databases and Zero Downtime releases</h3>
<p>Once you have roll-forward and roll-back scripts for each version of your application, it is relatively easy to use an application at deploy time to migrate your existing database to the correct version required by the version of the application you are deploying.</p>
<h4>Rolling Back without Losing Data</h4>
<p>In the case of a rollback, your roll-back scripts (as described in the previous section) can usually be designed to preserve any transactions that occur after the upgrade took place.</p>
<h4>Decoupling Application Deployment from Database Migration</h4>
<p>Another strategy is to decouple the database migration process from the application deployment process and perform them independently.</p>
<h2>Configuration Management</h2>
<p>Configuration management is all about trying to ensure that the files and software you are expecting to be on a machine are present, configured correctly, and working as intended.</p>
<p>When you have only a single machine this is fairly simple. When you have five or ten servers, it is still possible to do this manually, but it may take all day. However, when your infrastructure scales up into the thousands, you need a better way of doing things.</p>
<h3>Version Control</h3>
<p>What is &#8220;version control&#8221;, and why should you care? Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later. For the examples here, we will use software source code as the files being version controlled, though in reality you can do this with nearly any type of file on a computer.</p>
<p>If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use. It allows you to revert files back to a previous state, revert the entire project back to a previous state, compare changes over time, see who last modified something that might be causing a problem, who introduced an issue and when, and more. Using a VCS also generally means that if you screw things up or lose files, you can easily recover. In addition, you get all this for very little overhead.</p>
<h3>Best Practices of Version Control</h3>
<ul>
<li><strong>Keep absolutely everything in version control : </strong>Developers should use version control for source code (of course), but also they should use it for tests, database scripts, build and deployment scripts, documentation, libraries and configuration files for your applications.</li>
<li><strong>Check In Regularly to Trunk : </strong>Once the changes are checked into version control, they are available to the entire team.</li>
<li><strong>Use Meaningful Commit Messages : </strong>Always write detailed, multi-paragraph commit messages when you check in; this can save hours of debugging later if an error occurs. In a multi-paragraph commit message, the first paragraph gives a high-level summary and the remaining paragraphs give the full details.</li>
</ul>
<h3>Managing Components and Dependencies</h3>
<ul>
<li><strong>Managing External Libraries : </strong>External libraries usually come in binary form, unless you are using an interpreted language. There are two reasonable ways of managing these libraries.</li>
</ul>
<p>o   Check them into version control. This approach is the simplest solution and works fine for small projects; however, for larger projects and larger libraries it can make the version control repository too heavy, and so may be unviable.</p>
<p>o   Another is to declare them and use a tool like Maven or Ivy to download the libraries from Internet repositories into your own artifact repository.</p>
<ul>
<li><strong>Managing Components : </strong>It is a good practice to split your application into smaller components. Doing so limits the scope of changes to your application, reducing regression bugs. It also encourages reuse and enables a much more efficient development process on large projects.</li>
</ul>
<h3>Managing Software Configuration</h3>
<p>Configuration is one of the three key parts that comprise an application, along with its binaries and its data. Configuration information can be used to change the behavior of software at build time, deploy time, and run time. Delivery teams need to consider carefully what configuration options should be available, how to manage them throughout the application’s life, and how to ensure that configuration is managed consistently across components, applications, and technologies. You should treat the configuration of your system the same way you treat your code: subject it to proper management and testing. There are three questions to consider when managing your application configuration:</p>
<ul>
<li>How do you represent your configuration information?</li>
<li>How do your deployment scripts access it?</li>
<li>How does it vary between environments, applications and versions of applications?</li>
</ul>
<p>Each configuration setting can be modeled as a tuple (a data structure consisting of multiple parts, typically an ordered set of values). The set of tuples available and their values typically depend on three things:</p>
<ul>
<li>The Application</li>
<li>The version of the application</li>
<li>The environment it runs on</li>
</ul>
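A minimal sketch of this tuple model in Python; the application name, version, environments, and settings below are invented purely for illustration:

```python
# Configuration modeled as tuples keyed by (application, version, environment).
# All names and values here are hypothetical.
CONFIG = {
    ("shop", "1.2", "staging"):    {"db_host": "staging-db.internal", "debug": True},
    ("shop", "1.2", "production"): {"db_host": "prod-db.internal",    "debug": False},
}

def lookup(application, version, environment, key):
    """Resolve one setting for a given application/version/environment tuple."""
    return CONFIG[(application, version, environment)][key]

host = lookup("shop", "1.2", "production", "db_host")
```

A deployment script would resolve every setting it needs through a lookup like this, so the same application binary can be promoted unchanged from one environment to the next.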
<h3>Principles of managing Software Configurations</h3>
<p>Some of the <strong>principles</strong> of managing configuration are as follows:</p>
<ul>
<li>Consider where in your application lifecycle it makes sense to inject a particular piece of configuration.</li>
<li>Keep the available configuration options for your application in the same repository as its source code.</li>
<li>Values of the configuration should be managed separately</li>
<li>Configurations should always be performed by automated process using values taken from your configuration repository.</li>
<li>Use clear naming conventions and avoid obscure names.</li>
<li>Do not repeat the information.</li>
<li>Be minimalist. Keep the configuration information as simple as possible.</li>
<li>Avoid over-engineering the configuration system.</li>
<li>Ensure you have tests for your configurations.</li>
</ul>
<p><strong>Please refer to the following links to know more about DevOps Infrastructure</strong></p>
<ul>
<li><a href="https://effectivepmc.net/blog/infrastructure-as-code/">Infrastructure as Code</a></li>
</ul>
<p>The post <a href="https://effectivepmc.net/blog/infrastructure-and-environments/">DevOps Perspective of Infrastructure and Environments</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Continuous Delivery</title>
		<link>https://effectivepmc.net/blog/continuous-delivery/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Wed, 21 Mar 2018 12:51:51 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1518</guid>

					<description><![CDATA[<p>Continuous Delivery Continuous Integration is an enormous step forward in productivity and quality for most projects that adopt it. It ensures that the teams working together to create large and complex systems can do so with higher level of confidence and control then is achievable without it. CI ensures that the code that we create [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/continuous-delivery/">Continuous Delivery</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Continuous Delivery</h1>
<p>Continuous Integration is an enormous step forward in productivity and quality for most projects that adopt it. It ensures that the teams working together to create large and complex systems can do so with a higher level of confidence and control than is achievable without it. CI ensures that the code that we create (as a team) works by providing us with rapid feedback on any problems that we may introduce with the changes we commit. It is primarily focused on asserting that the code compiles successfully and passes a body of unit and acceptance tests. <strong><em><u>However, CI is not enough.</u></em></strong></p>
<p>CI mainly focuses on development teams. The output of the CI system normally forms the input to the manual testing process and to the rest of the release process. Much of the waste in releasing software comes from the progress of software through testing and operations. For example, it is common to see:</p>
<ul>
<li>Build and Operations teams waiting for documentation or fixes</li>
<li>Testers waiting for “good” builds of the software</li>
<li>Development teams receiving bug reports weeks after the team has moved on to the new functionality</li>
<li>Discovering, towards the end of the development process, that the application’s architecture will not support the system’s non-functional requirements</li>
</ul>
<p>This leads to software that is undeployable because it has taken so long to get it into a production-like environment, and buggy because the feedback cycle between the development team and the testing and operations team is so long.</p>
<p>In Continuous Delivery, a software team plans to deliver software in short cycles. They perform development, testing, and release in such a short time that incremental changes can be easily delivered to production. In Continuous Delivery, <a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> practices help us create a repeatable deployment process that achieves this objective.</p>
<p>A deployment pipeline is the basis of Continuous Delivery. Some basics about the deployment pipeline are:</p>
<ul>
<li>The purpose of the deployment pipeline is to give everyone involved in delivering software visibility into the progress of builds from check-in to release.</li>
<li>It should be possible to see which changes have broken the application and which resulted in release candidates suitable for manual testing or release.</li>
<li>Your implementation should make it possible to perform push-button deployments into manual testing environments.</li>
<li>Choosing to release a particular version of your application should also be a push-button task that can be performed with full knowledge that the release candidate being deployed has passed the entire pipeline successfully and hence has had a battery of automated and manual tests performed on it in a production-like environment.</li>
<li>Once you have a deployment pipeline implemented, inefficiencies in your release process will become obvious. All kinds of useful information can be derived from a working deployment pipeline, such as how long it takes a release candidate to get through the various manual testing stages. Once you have this information, you can work to optimize the value stream.</li>
<li>There is no one-size-fits-all solution to the complex problem of implementing a deployment pipeline. The crucial point is to create a system of record that manages each change from check-in to release, providing the information you need to discover problems as early as possible in the process.</li>
<li>A deployment pipeline, in turn, depends on having some foundations in place:</li>
</ul>
<p>o   Good configuration management</p>
<p>o   Automated scripts for building and deploying your application</p>
<p>o   Automated tests to prove that your application will deliver value to its users</p>
<p>o   Discipline to ensure that only changes that have passed through the automated build, test and deployment system get released.</p>
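The gating idea behind the pipeline (a change progresses stage by stage and becomes releasable only once every stage has passed) can be sketched as follows; the stage names and the stage runner are illustrative, not a real CI tool:

```python
# Illustrative pipeline stages, in the order a change must pass through them.
STAGES = ["commit", "acceptance-tests", "manual-testing", "release"]

def run_pipeline(change, run_stage):
    """run_stage(change, stage) -> bool.

    Returns (stages passed, releasable?). A failure stops the pipeline at
    that stage, making the breakage visible in the system of record.
    """
    passed = []
    for stage in STAGES:
        if not run_stage(change, stage):
            return passed, False        # broken here; nothing later runs
        passed.append(stage)
    return passed, True                 # a candidate that passed everything

# A change that fails manual testing stops there.
passed, ok = run_pipeline("change-42", lambda change, stage: stage != "manual-testing")
```

The point of the sketch is the discipline in the last bullet above: only a change for which `ok` is true ever reaches the release stage.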
<h2><a href="https://effectivepmc.net/wp-content/uploads/2018/03/Continuous-Delivery.png"><img loading="lazy" decoding="async" class="alignnone wp-image-1519 size-full" src="https://effectivepmc.net/wp-content/uploads/2018/03/Continuous-Delivery.png" alt="Continuous Delivery Steps" width="372" height="350" /></a></h2>
<h2>Plan</h2>
<p>This adoption path consists of one practice that focuses on establishing business goals and adjusting them based on customer feedback: <em>continuous business planning.</em></p>
<p>Businesses today need to be agile and able to react quickly to customer feedback. Achieving this goal centers on an organization’s ability to do things right. Unfortunately, traditional approaches to product delivery are too slow for today’s speed of doing business, partially because these approaches depend on custom development and manual processes and because teams are operating in silos. Information required to plan and re-plan quickly, while maximizing the ability to deliver value, is fragmented and inconsistent. Often the right feedback isn’t received early enough to achieve the right level of quality to truly deliver value.</p>
<p>Teams also struggle to incorporate feedback that should inform the prioritization of investments and then to collaborate as an organization to drive execution in a continuous delivery model. For some teams, planning is viewed as governance overhead that’s intrusive and slows them down instead of an activity that enables them to deliver value with speed. Faster delivery provides greater business agility, but you must also manage speed with the trust and confidence that what you’ve delivered is the right thing. You can’t deliver software at speed if you don’t trust the accuracy of your business goals, your measurements, and your platforms.</p>
<p>DevOps helps to reconcile these competing perspectives, helping teams collaboratively establish business goals and continuously change them based on customer feedback thereby improving both agility and business outcomes. At the same time, businesses need to manage costs. By identifying and eliminating waste in the development process, the team becomes more efficient but also addresses cost. This approach helps teams strike an optimal balance between all these considerations, across all phases of the DevOps life cycle in moving to a continuous delivery model.</p>
<h2>Code/Build/Test</h2>
<p>This adoption path involves two practices: collaborative development and continuous testing. As such, it forms the core of development and quality assurance (QA) capabilities.</p>
<p><strong><u>Collaborative Development</u></strong></p>
<p>Software delivery efforts in an enterprise involve large numbers of cross-functional teams, including lines-of-business owners, business analysts, enterprise and software architects, developers, QA practitioners, operations personnel, security specialists, suppliers, and partners. Practitioners from these teams work on multiple platforms and may be spread across multiple locations. <em>Collaborative development </em>enables these practitioners to work together by providing a common set of practices and a common platform they can use to create and deliver software.</p>
<p>One core capability included within collaborative development is <em>continuous integration</em>, a practice in which software developers continuously or frequently integrate their work with that of other members of the development team.</p>
<p><strong><u>Continuous Testing</u></strong></p>
<p><em>Continuous testing </em>means testing earlier and continuously across the life cycle. This results in reduced costs, shortened testing cycles, and continuous feedback on quality. This process is also known as <em>shift-left testing</em>, which stresses integrating development and testing activities to ensure quality is built in as early in the life cycle as possible, not left to later. This is facilitated by adopting capabilities like automated testing and service virtualization. Service virtualization simulates production-like environments and makes continuous testing feasible.</p>
<p><strong><u>Micro-Services Architecture</u></strong></p>
<p>An organization can introduce continuous deployment without major architectural modifications. Continuous deployment reduces the time required to place a component into production; however, it requires architectural support:</p>
<ul>
<li>Deploying without the necessity of explicit coordination with other teams reduces the time required to place a component into production.</li>
<li>Allowing for different versions of the same service to be simultaneously in production leads to different team members deploying without coordination with other members of their team.</li>
<li>Rolling back a deployment in the event of errors allows for various forms of live testing.</li>
</ul>
<p>Microservice architecture is an architectural style that satisfies these requirements. This style is used in practice by organizations that have adopted, or been inspired by, many DevOps practices. Although project requirements may cause deviations from this style, it remains a good general basis for projects that are adopting DevOps practices.</p>
<p>A microservice architecture consists of a collection of services, where each service provides a small amount of functionality and the total functionality of the system is derived from composing these services. This style makes it possible to have multiple versions of a service in production simultaneously and to roll back to a prior version relatively easily.</p>
<p>Having an architecture composed of small services is a response to having small teams. The DevOps goal of minimizing coordination among various teams can be achieved by using a microservice architectural style, where the coordination mechanism, the resource management decisions, and the mapping of architectural elements are all specified by the architecture and hence require minimal inter-team coordination.</p>
<p>A collection of practices for development can be added to the microservice architectural style to achieve deployability and modifiability, such as identifying and isolating areas of likely change.</p>
<p>Adopting a microservice architectural style introduces additional challenges in monitoring, debugging, performance management, and testing. Migrating from an existing architecture to a microservice architectural style requires careful planning and commitment.</p>
<h2>Release/Deploy</h2>
<p>The Deploy adoption path is where most of the root capabilities of DevOps originated. Continuous release and deployment take the concept of continuous integration to the next step. The practice that enables release and deploy also enables the creation of a delivery pipeline. This pipeline facilitates continuous deployment of software to QA and then to production in an efficient, automated manner. The goal of continuous release and deployment is to release new features to customers and users as soon as possible.</p>
<h2>Operate/Monitor</h2>
<p>The Operate adoption path includes two practices that allow businesses to monitor how released applications are performing in production and to receive feedback from customers. This data allows the businesses to react in an agile manner and change their business plans as necessary.</p>
<p><strong><u>Continuous Monitoring</u></strong></p>
<p>Continuous monitoring provides data and metrics to operations, QA, development, lines-of-business personnel, and other stakeholders about applications at different stages of the delivery cycle. These metrics aren’t limited to production. Such metrics allow stakeholders to react by enhancing or changing the features being delivered and/or the business plans required to deliver them.</p>
<p><strong><u>Continuous Customer Feedback and Optimization</u></strong></p>
<p>The two most important types of information that a software delivery team can get are data about how customers use the application and feedback that those customers provide upon using the application. New technologies allow businesses to capture customer behavior and customer pain points right as they use the application. This feedback allows different stakeholders to take appropriate actions to improve the applications and enhance customer experience. Lines of business may adjust their business plans, development may adjust the capabilities it delivers, and operations may enhance the environment in which the application is deployed. This continuous feedback loop is an essential component of DevOps, allowing businesses to be more agile and responsive to customer needs.</p>
<p>The post <a href="https://effectivepmc.net/blog/continuous-delivery/">Continuous Delivery</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>DevOps Toolchain</title>
		<link>https://effectivepmc.net/blog/devops-toolchain/</link>
		
		<dc:creator><![CDATA[Amit Kulkarni]]></dc:creator>
		<pubDate>Sun, 18 Mar 2018 18:14:11 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://effectivepmc.net/?p=1628</guid>

					<description><![CDATA[<p>DevOps Toolchain DevOps is all about how businesses must be results-oriented to meet evolving customer demand and gain greater market share. Today’s marketplace is connected, always on and increasingly competitive. Companies are adopting what we call an “as-a-service” approach to achieve better outcomes fast, consuming and leveraging leading-edge technologies such as cloud and automation. As [&#8230;]</p>
<p>The post <a href="https://effectivepmc.net/blog/devops-toolchain/">DevOps Toolchain</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>DevOps Toolchain</h1>
<p><a href="https://effectivepmc.net/blog/what-is-devops/">DevOps</a> is all about how businesses must be results-oriented to meet evolving customer demand and gain greater market share. Today’s marketplace is connected, always on and increasingly competitive. Companies are adopting what we call an “as-a-service” approach to achieve better outcomes fast, consuming and leveraging leading-edge technologies such as cloud and automation.</p>
<p>As IT systems grow exponentially, and cloud solutions proliferate, non-automated, manual systems increasingly are becoming a major business liability. Today’s systems are simply becoming too big and complex to run completely manually, and working without automation is largely unsustainable for many enterprises across all industries.</p>
<p>Automation involves a set of tools, processes and insights that allows IT environments to self-modify and adjust, and some enterprises have started using intelligent automation to drive a new, more productive relationship between people and machines.</p>
<p>For example, IT automation is often used to auto-scale and load-balance large fleets of servers, manage global content distribution based on geographic demand, enable self-healing of IT systems, and manage security, all with limited ongoing manual intervention.</p>
<p>Moreover, automation enables the ability to adapt and improve the service experience without manual intervention. However, while these tools offer new strengths and capabilities, they are meant to complement and enhance human skills.</p>
<p>Effective automation depends on adequate insights collected from all the systems relevant to the service experience and business outcome you’re trying to augment. Insights from data are necessary, and they open paths to automated predictions and ultimately to using machine learning, or artificial intelligence, as part of the full scope of the as-a-service construct.</p>
<p>Specific insights known as telemetry allow signals to be harvested and interpreted so automation can better adjust production systems to maintain a healthy business. The insight gathered from such analytics allows automation to validate and compose modification rules. For example, sensors that detect a supply chain issue could automatically reroute or fine-tune related functions, such as dispatch or logistics, to solve or generate a workaround for the issue. The business flow can adapt and realign automatically with the ultimate goal of improving the customer experience.</p>
<p><strong><u>Automation Creates High Resiliency</u></strong></p>
<p>Two common business outcomes that depend on efficient automation are highly resilient systems and experimentation platforms.</p>
<p>Highly resilient systems include automation that can detect, avoid, heal and remediate any deviations from normal, healthy business function. To detect deviations, automation capabilities need to understand what the “steady state” of the system is and what constitutes the “health” of the system under varying conditions. For each detected deviation from an established steady state, a specific automation is triggered that attempts to return the system back to the steady state.</p>
<p>The best way to determine if resiliency automation works effectively is through a process known as “fault injection.” Highly resilient systems run under constant fire drills in which operations insert faults into the system while developers continuously build responding resiliency automation.</p>
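As a toy illustration of returning a system to its steady state: the metric here (a minimum count of healthy instances) and the remediation hook are invented for this sketch, not from the original text:

```python
# Hypothetical steady state: at least this many healthy instances running.
STEADY_STATE_MIN = 3

def reconcile(healthy_instances, start_instance):
    """Trigger automated remediation until the system is back at steady state.

    Each call to start_instance() stands in for one remediation action,
    such as launching a replacement server.
    """
    started = 0
    while len(healthy_instances) + started < STEADY_STATE_MIN:
        start_instance()            # automation, not a human runbook
        started += 1
    return started

launched = []
replacements = reconcile(["instance-a"], lambda: launched.append("replacement"))
```

A fault-injection drill is then simply removing entries from the healthy set and checking that `reconcile` restores the count.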
<p><strong><u>Automation Creates Higher Degree of Experimentation</u></strong></p>
<p>Automation also can provide a higher degree of experimentation and increase agility, two key attributes of the as-a-service economy. Automatically provisioning a component such as a virtual machine, for example, is only a piece of the puzzle, since automation is most valuable when it contributes to improving a customer experience or delivering a business outcome.</p>
<p>A platform that’s constantly testing, experimenting and developing allows companies to try new ideas in production quickly without fear of failure or outage. When confidence in system resiliency is high, it allows businesses to test new things directly in production (A/B testing). If an experiment fails, there is no harm done, as automation returns the system to its steady state. If an experiment succeeds, it is quickly absorbed into production itself.</p>
<p>A fast, efficient experimentation platform enables businesses to react faster to failures and successes—and pivot accordingly without excess wasted resources. For example, a retail company might change a shopping basket feature for 1 percent of its customers. With constant measurement and instrumentation, the company can automatically derive insights, determine if the change is effective and create a chain of automated reactions. If, say, the demand spikes for a new offering based on a limited customer pilot, the system can reset stocking levels ahead of geographic or further customer segment rollout. This ability increases a company’s agility and adaptability, improving the customer experience and delivering on the most important factors determining success in today’s as-a-service business environment.</p>
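The 1 percent rollout from the retail example above can be sketched as deterministic traffic bucketing; the percentage comes from the text, while the variant names and bucketing scheme are assumptions for illustration:

```python
import zlib

EXPERIMENT_PERCENT = 1  # the "1 percent of its customers" from the example

def variant_for(customer_id):
    """Deterministic bucketing: the same customer always sees the same variant."""
    bucket = zlib.crc32(customer_id.encode()) % 100
    return "new-basket" if bucket < EXPERIMENT_PERCENT else "control"

# Roughly 1% of customers land in the experiment.
exposed = sum(variant_for(f"customer-{i}") == "new-basket" for i in range(10_000))
```

Because the bucketing is a pure function of the customer ID, measurement and an automated rollback (shrinking `EXPERIMENT_PERCENT` to zero) need no per-customer state.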
<h2>Use of Tools to Facilitate DevOps</h2>
<p>Tools are inherent to our jobs, inherent to how we solve the problems we face each day. Our comfort level with the set of tools available to us, and our ability to adapt to new tools as they evolve, shape our thoughts and ideas. The availability of collective knowledge within the palm of your hand, combined with collaboration across organization and company boundaries through open source software, is dramatically disrupting the status quo of work. Companies mired in managing infrastructure configuration by hand, with unknown numbers of divergent systems, unable to quickly change and respond to market demands, will struggle against counterparts who have managed to contain their complexity on one axis through infrastructure automation. While it is possible to manage servers by hand, or even with artisanally crafted shell scripts, a proper configuration management tool is invaluable, especially as your environment and team change.</p>
<p>Even the best software developers will struggle if they are working in an environment without a version control system in place. Tools matter in that not having them, or using them incorrectly, can destroy the effectiveness of even the most intelligent and empathetic of engineers. The consideration you give to the tools you use will be reflected in the overall organization’s success. You’ll find that what is a good tool for some teams might not be a good one for others. The strength of tools comes from how well they fit the needs of the people or groups using them. If you don’t need feature X, its presence won’t be a selling point when considering which tool your organization should use.</p>
<p>Especially in larger organizations with teams numbering in the dozens, finding one tool that meets the needs of every team will be increasingly difficult. You will have to strike a balance between deciding on one tool that will be used across the entire company consistently and allowing more freedom of choice among individual teams. There are benefits both to the consistency and manageability that come from having only one tool in use in an organization, and to allowing teams to pick the specific tools that work best for them.</p>
<p>Because DevOps is a cultural shift and collaboration (between development, operations and testing), there is no single &#8220;DevOps tool&#8221;: it is rather a set (or &#8220;DevOps toolchain&#8221;), consisting of multiple tools in the Delivery and Deployment pipelines. Generally, DevOps tools fit into one or more of these categories, which is reflective of the software development and delivery process:</p>
<ul>
<li>Plan — Testing strategy, CI/CD strategy, choice of tools, etc.;</li>
<li>Code — Code development and review, version control tools, code merging;</li>
<li>Build — Continuous integration tools, build status;</li>
<li>Test — Continuous testing tools that give timely feedback on quality and performance;</li>
<li>Release — Change management, release approvals, release automation;</li>
<li>Deploy — Infrastructure configuration and management, Infrastructure-as-Code tools;</li>
<li>Operate and Monitor — Application performance monitoring, end-user experience.</li>
</ul>
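<p>Strung together, the stages above are what a pipeline definition automates: each stage runs only if the previous one succeeded, and a failure stops the release from progressing. A toy sketch of that gating behavior in shell (the stage commands are placeholders, not a real project&#8217;s build):</p>

```shell
#!/bin/sh
# Toy pipeline: each stage must succeed before the next one runs,
# mirroring the Build -> Test -> Release -> Deploy flow above.
set -e                       # abort the pipeline on the first failing stage

stage() {
    name=$1; shift
    echo "[$name] running"
    "$@"                     # a non-zero exit here stops the whole pipeline
}

stage build   true           # placeholder for: compile / package
stage test    true           # placeholder for: run the test suite
stage release true           # placeholder for: tag and publish an artifact
stage deploy  true           # placeholder for: roll out to an environment

echo "pipeline succeeded" > /tmp/pipeline.status
cat /tmp/pipeline.status
```

<p>CI servers such as Jenkins express the same idea declaratively: a pipeline file lists the stages, and the server handles ordering, failure handling and reporting.</p>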
<p>Though many tools are available, certain categories of them are essential in any organization&#8217;s DevOps toolchain.</p>
<p>Tools such as Docker (containerization), Jenkins (continuous integration), Puppet (Infrastructure-as-Code) and Vagrant (virtualization)—among many others—are frequently referenced in DevOps tooling discussions.</p>
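<p>As a taste of what such tools look like in practice, a containerized build often starts from a short Dockerfile that packages an application and its runtime together, so every environment runs the same image. The base image, file names and entry point below are illustrative only, not taken from a real project:</p>

```dockerfile
# Illustrative Dockerfile (hypothetical app): build once, run the
# same image in dev, test and production.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Then copy the application code itself.
COPY . .
# Hypothetical entry point for the containerized service.
CMD ["python", "app.py"]
```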
<p>The typical stages in a DevOps toolchain look like this:</p>
<p><a href="https://effectivepmc.net/wp-content/uploads/2018/03/DevOpsToolchain.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-1630" src="https://effectivepmc.net/wp-content/uploads/2018/03/DevOpsToolchain.png" alt="DevOps Toolchain" width="628" height="472" /></a></p>

<p><strong>Please refer to the following links to know more about DevOps Tools</strong></p>



<ul class="wp-block-list">
<li><a href="https://effectivepmc.net/blog/jenkins-tool/">DevOps Tool – Jenkins</a></li>
<li><a href="https://effectivepmc.net/blog/devops-tool-cucumber/">DevOps Tool – Cucumber</a></li>
</ul>
<p>The post <a href="https://effectivepmc.net/blog/devops-toolchain/">DevOps Toolchain</a> appeared first on <a href="https://effectivepmc.net">World Of Agile</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
