Releasing software frequently to users is usually a time-consuming and painful process. Continuous Integration and Continuous Delivery (CI/CD) can help organizations become more agile by automating and streamlining the steps involved in going from an idea, a change in the market, or a business requirement to a product delivered to the customer.
Jenkins has long been a centerpiece of CI, and with the introduction of the Jenkins Pipeline plugin it has become a popular tool for building Continuous Delivery pipelines that not only build and test code changes but also push each change through the steps required to make it ready for release in higher environments such as UAT and Stage.
CI/CD is one of the most popular use cases for OpenShift Container Platform. OpenShift provides a certified Jenkins container for building Continuous Delivery pipelines and scales pipeline execution through on-demand provisioning of Jenkins slaves in containers. This allows Jenkins to run many jobs in parallel and removes the wait time for running builds in large projects. OpenShift provides an end-to-end solution for building complete deployment pipelines and, out of the box, enables the automation required to manage code and configuration changes through the pipeline.
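As a minimal sketch of how this looks in practice, the declarative Jenkinsfile below runs its work inside an on-demand slave pod. The `maven` agent label is an assumption: it refers to one of the pod templates commonly provided by the OpenShift Jenkins image, and the labels available in your cluster may differ.

```groovy
// Minimal sketch: run a build inside an on-demand Jenkins slave pod on OpenShift.
// The 'maven' label is an assumption; it maps to a Kubernetes pod template
// (commonly shipped with the OpenShift Jenkins image) and may differ in your cluster.
pipeline {
    agent { label 'maven' }   // a slave pod is provisioned for this run and removed afterwards
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean package'   // assumes a Maven-based project
            }
        }
    }
}
```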
Continuous Integration is a development practice in which developers commit changes to the source code in a shared repository several times a day or more frequently. Every commit made to the repository is then built, which allows teams to detect problems early. Beyond that, depending on the Continuous Integration tool, there are other functions such as deploying the built application to a test server and providing the relevant teams with the build and test results.
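To make this concrete, here is a hedged sketch of a declarative Jenkins pipeline that builds and tests every change pulled from the shared repository. The polling schedule and the Maven commands are assumptions; in practice a push webhook from the source repository is the more common trigger.

```groovy
// Minimal sketch: build and test every change from the shared repository.
// Polling is used here for illustration; a push webhook is the more common setup.
pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')   // check the repository for new commits roughly every 5 minutes
    }
    stages {
        stage('Build') {
            steps {
                checkout scm                 // fetch the commit that triggered this run
                sh 'mvn -B clean compile'    // assumes a Maven-based project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn -B test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'   // publish test results for the team
                }
            }
        }
    }
}
```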
Imagine a scenario where the complete source code of an application is built and then deployed to a test server only once all development work is finished. It may sound like a reasonable way to develop software, but this process has serious flaws: defects are discovered late, it is hard to isolate which change caused a failure, and feedback to developers is slow. Not only does the software delivery process become slow, but the quality of the software also suffers, and that leads to customer dissatisfaction. To overcome this, there was a dire need for a system in which developers can continuously trigger a build and test for every change made to the source code. This is what CI is all about.
Continuous Integration is a core part of DevOps, tying the various stages of the DevOps lifecycle together, and Jenkins is the best-known Continuous Integration tool. Jenkins is an open source automation server written in Java, with a large ecosystem of plugins built for Continuous Integration. It is used to build and test software projects continuously, making it easier for developers to integrate changes into the project and for users to obtain a fresh build. It also lets you continuously deliver your software by integrating with a large number of testing and deployment technologies.
A pipeline stage is a logically grouped set of tasks intended to achieve a specific function within a pipeline (e.g. build the app, deploy the app, test the app, promote the app). The pipeline succeeds when all stages complete without failure. Typically, stages run serially (one after the other) in a consistent order, but some may run in parallel. We refer to the movement from one stage to the next as triggering. The ultimate goal for a successful pipeline is that it can run all the way through automatically (automatic triggering), taking the workload all the way into a production state without any intervention by humans. This level of automation allows development teams to release small amounts of code quickly and with low risk.
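The sketch below illustrates these ideas in a declarative Jenkinsfile: stages run one after the other in a fixed order, two kinds of tests run in parallel within a single stage, and a fully successful run carries the change through to a promotion stage with no human intervention. The stage names and shell commands are illustrative assumptions, not a prescribed layout.

```groovy
// Illustrative sketch of serially ordered stages with one parallel stage.
// Stage names and commands are assumptions; adapt them to your own project.
pipeline {
    agent any
    stages {
        stage('Build the App') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test the App') {
            parallel {                      // these two sets of tests run at the same time
                stage('Unit Tests') {
                    steps { sh 'mvn -B test' }
                }
                stage('Integration Tests') {
                    steps { sh 'mvn -B verify -Pintegration' }
                }
            }
        }
        stage('Deploy the App') {
            steps { sh './deploy.sh test' }          // hypothetical deployment script
        }
        stage('Promote the App') {
            steps { sh './deploy.sh production' }    // reached only if every earlier stage succeeded
        }
    }
}
```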
Achieving a pipeline with this level of capability requires a significant investment by the development and operations teams to build proper testing and validation into the automated pipeline. This ensures the quality and compliance of the code before it is deployed into production. For this reason, many pipelines initially include manual triggers: stops or pauses after certain stages that require manual intervention to run tests, review code, or receive sign-off before approving the pipeline to continue to a higher stage.
There is a continuum between strictly manually triggered pipelines and fully automatic ones. Most organizations begin with manual triggering between stages, but should look to remove as many of those manual triggers as feasible in order to reduce bottlenecks in the system.
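In a Jenkinsfile, such a manual trigger is commonly expressed with the `input` step or directive, which pauses the pipeline until someone approves it. The sketch below assumes a promotion to a hypothetical UAT environment; the message, submitter group, and deployment command are placeholders.

```groovy
// Sketch of a manual trigger: the pipeline pauses until a human approves the promotion.
// The submitter group and the deployment command are hypothetical placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Promote to UAT') {
            input {
                message 'Deploy this build to UAT?'
                submitter 'release-managers'   // only these users may approve
            }
            steps {
                sh './deploy.sh uat'           // runs only after approval
            }
        }
    }
}
```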
Both OpenShift and Jenkins provide methods to trigger builds and deployments. Centralizing most of the workflow triggers in Jenkins reduces the complexity of understanding deployments and why they occurred. The pipeline BuildConfigs are created in OpenShift, and the OpenShift sync plugin ensures that Jenkins has the same pipelines defined, which simplifies Jenkins pipeline bootstrapping. Pipeline builds may be initiated from either OpenShift or Jenkins.
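When the pipeline itself needs to drive OpenShift, the Jenkinsfile can use the OpenShift client plugin's DSL. The sketch below assumes that plugin is installed and that a BuildConfig and DeploymentConfig named `myapp` already exist in the project Jenkins runs against; treat the resource names as placeholders.

```groovy
// Sketch: drive OpenShift builds and deployments from a Jenkins pipeline.
// Assumes the OpenShift client plugin is installed; 'myapp' is a placeholder name
// for an existing BuildConfig/DeploymentConfig in the current project.
pipeline {
    agent any
    stages {
        stage('Build Image') {
            steps {
                script {
                    openshift.withCluster() {        // the cluster Jenkins is running in
                        openshift.withProject() {    // the project of the current build
                            openshift.selector('bc', 'myapp').startBuild('--wait')
                        }
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                script {
                    openshift.withCluster() {
                        openshift.withProject() {
                            openshift.selector('dc', 'myapp').rollout().latest()
                        }
                    }
                }
            }
        }
    }
}
```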
This blog has summarized how you can readily use integrated pipelines with your OpenShift projects. Automating each gate and step in a pipeline lets you feed the results of your activities back to teams visibly, so you can react quickly when failures occur. The ability to continually iterate on what you put in your pipeline is a great way to deliver quality software fast. Use pipeline capabilities to easily create container applications on demand for all of your build, test, and deployment requirements.