<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[DevOps Community]]></title><description><![CDATA[The DevOps/SRE community is for those folks who are trying to learn or explore DevOps with the help of experienced professionals. Opportunities are open to shar]]></description><link>https://blog.devopscommunity.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1673069415079/0eyJA_QpM.png</url><title>DevOps Community</title><link>https://blog.devopscommunity.in</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 14:02:06 GMT</lastBuildDate><atom:link href="https://blog.devopscommunity.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Developer Relations (DevRel) WhitePaper]]></title><description><![CDATA[Learn DevRel today!

1. Introduction to Developer Relations (DevRel)
Developer Relations, commonly known as DevRel, is a multifaceted discipline that serves as a crucial bridge between technology companies and the developer community. It encompasses ...]]></description><link>https://blog.devopscommunity.in/developer-relations-devrel-whitepaper</link><guid isPermaLink="true">https://blog.devopscommunity.in/developer-relations-devrel-whitepaper</guid><category><![CDATA[Developer]]></category><category><![CDATA[development]]></category><category><![CDATA[devtools]]></category><category><![CDATA[DevRel]]></category><category><![CDATA[developer relations]]></category><dc:creator><![CDATA[Rohit Ghumare]]></dc:creator><pubDate>Fri, 26 Jul 2024 11:43:36 GMT</pubDate><content:encoded><![CDATA[<p>Learn DevRel today!</p>
<p><img src="https://github.com/rohitg00/devrel-whitepaper/blob/main/devrel_flowchart_avocado.png?raw=true" alt="devrel_flowchart_avocado.png" /></p>
<h2 id="heading-1-introduction-to-developer-relations-devrel">1. Introduction to Developer Relations (DevRel)</h2>
<p>Developer Relations, commonly known as DevRel, is a multifaceted discipline that serves as a crucial bridge between technology companies and the developer community. It encompasses a wide range of activities aimed at fostering positive relationships with developers, promoting products or platforms, and gathering valuable feedback to improve the overall developer experience.</p>
<h3 id="heading-definition-and-scope-of-devrel">Definition and scope of DevRel</h3>
<p>DevRel is the practice of building and maintaining relationships between a company and its developer community. It involves creating and nurturing a thriving ecosystem around a company's products, APIs, or platforms. The scope of DevRel extends beyond traditional marketing or support roles, as it requires a deep understanding of both technical concepts and community dynamics.</p>
<p>Key aspects of DevRel include:</p>
<ul>
<li><p>Developer advocacy and evangelism</p>
</li>
<li><p>Technical content creation and documentation</p>
</li>
<li><p>Community management and engagement</p>
</li>
<li><p>Product feedback collection and analysis</p>
</li>
<li><p>Developer education and support</p>
</li>
</ul>
<h3 id="heading-the-evolution-of-devrel-in-the-tech-industry">The evolution of DevRel in the tech industry</h3>
<p>The concept of DevRel has evolved significantly over the past few decades:</p>
<ol>
<li><p>Early days (1980s-1990s): Initially, it emerged as a way for companies to provide technical support to developers using their products. This was often limited to documentation and basic customer service.</p>
</li>
<li><p>Web 2.0 era (2000s): With the rise of social media and online communities, DevRel began to focus more on community building and developer engagement.</p>
</li>
<li><p>API economy (2010s): As APIs became central to software development, DevRel expanded to include API evangelism and education.</p>
</li>
<li><p>Cloud and open-source era (2010s-present): The growth of cloud computing and open-source software has made DevRel crucial for companies looking to build and maintain developer ecosystems.</p>
</li>
</ol>
<p>Today, DevRel professionals play a vital role in shaping product strategy, driving adoption, and fostering innovation within the developer ecosystem. They serve as the voice of the developer within their organizations and help companies build products that truly meet the needs of their target audience.</p>
<p>As we delve deeper into this whitepaper, we will explore the various aspects of DevRel, including the roles and responsibilities of DevRel professionals, the skills required for success in this field, and the impact of DevRel on business growth and ROI. We'll also examine case studies, best practices, and future trends in this rapidly evolving field.</p>
<h2 id="heading-2-the-role-of-devrel-professionals">2. The Role of DevRel Professionals</h2>
<p>DevRel professionals wear many hats and play a crucial role in bridging the gap between developers and the companies that create tools, platforms, and services for them. Their work is essential in building and maintaining a healthy, engaged developer community.</p>
<h3 id="heading-key-responsibilities-and-tasks">Key responsibilities and tasks</h3>
<ol>
<li><p>Developer Advocacy:</p>
<ul>
<li><p>Representing the company at conferences, meetups, and online events</p>
</li>
<li><p>Creating and delivering technical presentations and demos</p>
</li>
<li><p>Writing blog posts, articles, and tutorials</p>
</li>
<li><p>Engaging with developers on social media and forums</p>
</li>
</ul>
</li>
<li><p>Product Feedback and Improvement:</p>
<ul>
<li><p>Gathering feedback from developers on products and services</p>
</li>
<li><p>Communicating developer needs and pain points to internal teams</p>
</li>
<li><p>Collaborating with product managers and engineers to improve developer experience</p>
</li>
</ul>
</li>
<li><p>Documentation and Content Creation:</p>
<ul>
<li><p>Writing and maintaining technical documentation</p>
</li>
<li><p>Creating sample code, SDKs, and starter projects</p>
</li>
<li><p>Producing video tutorials and webinars</p>
</li>
</ul>
</li>
<li><p>Community Management:</p>
<ul>
<li><p>Building and nurturing developer communities (online and offline)</p>
</li>
<li><p>Organizing and running hackathons, workshops, and other events</p>
</li>
<li><p>Managing developer programs and initiatives</p>
</li>
</ul>
</li>
<li><p>Technical Support:</p>
<ul>
<li><p>Providing high-level technical support to developers</p>
</li>
<li><p>Creating and maintaining FAQs and troubleshooting guides</p>
</li>
<li><p>Escalating complex issues to engineering teams</p>
</li>
</ul>
</li>
<li><p>Strategy and Metrics:</p>
<ul>
<li><p>Developing and implementing DevRel strategies</p>
</li>
<li><p>Tracking and analyzing key performance indicators (KPIs)</p>
</li>
<li><p>Reporting on the impact of DevRel activities to stakeholders</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-the-diverse-backgrounds-of-devrel-professionals">The diverse backgrounds of DevRel professionals</h3>
<p>DevRel professionals come from a variety of backgrounds, reflecting the multidisciplinary nature of the role:</p>
<ol>
<li><p>Software Development: Many DevRel professionals have a background in software engineering, which provides them with the technical knowledge needed to understand and communicate complex concepts.</p>
</li>
<li><p>Technical Writing: Those with experience in technical writing bring strong communication skills and the ability to create clear, concise documentation.</p>
</li>
<li><p>Marketing and Communications: Professionals from marketing backgrounds contribute expertise in messaging, branding, and community engagement.</p>
</li>
<li><p>Product Management: Experience in product management can be valuable for understanding the product lifecycle and bridging the gap between developers and internal teams.</p>
</li>
<li><p>Customer Support: A background in technical support provides insights into common developer challenges and effective problem-solving techniques.</p>
</li>
<li><p>Education: Those with teaching experience bring strong presentation skills and the ability to explain complex topics to diverse audiences.</p>
</li>
</ol>
<p>The diversity of backgrounds in DevRel reflects the multifaceted nature of the role and the importance of having a team with complementary skills and experiences.</p>
<h2 id="heading-3-skills-and-qualities-for-success-in-devrel">3. Skills and Qualities for Success in DevRel</h2>
<p>Success in Developer Relations requires a unique blend of technical knowledge, communication skills, and interpersonal abilities. DevRel professionals must be able to understand complex technical concepts, explain them clearly to diverse audiences, and build relationships with both developers and internal stakeholders.</p>
<h3 id="heading-essential-skills-for-devrel-professionals">Essential skills for DevRel professionals</h3>
<ol>
<li><p>Technical Proficiency:</p>
<ul>
<li><p>Strong programming skills and familiarity with multiple languages and frameworks</p>
</li>
<li><p>Understanding of software development processes and best practices</p>
</li>
<li><p>Knowledge of API design and usage</p>
</li>
<li><p>Familiarity with cloud platforms and DevOps practices</p>
</li>
</ul>
</li>
</ol>
<h1 id="heading-to-learn-a-full-version-of-this-whitepaper-check-the-link-below-httpsgithubcomrohitg00devrel-whitepaperhttpsgithubcomrohitg00devrel-whitepapertreemain">To learn a full version of this whitepaper, Check the link below: <a target="_blank" href="https://github.com/rohitg00/devrel-whitepaper/tree/main">https://github.com/rohitg00/devrel-whitepaper/</a></h1>
]]></content:encoded></item><item><title><![CDATA[Docker Commands Cheatsheet]]></title><description><![CDATA[Docker Commands Cheat Sheet
Container Lifecycle

Creating and Starting a Container

docker run -d --name my_container nginx - Use this command to create and start a container named my_container using the nginx image.


Starting, Stopping, and Restarti...]]></description><link>https://blog.devopscommunity.in/docker-commands-cheatsheet</link><guid isPermaLink="true">https://blog.devopscommunity.in/docker-commands-cheatsheet</guid><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[Docker compose]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[documentation]]></category><category><![CDATA[command line]]></category><category><![CDATA[cheatsheet]]></category><dc:creator><![CDATA[Rohit Ghumare]]></dc:creator><pubDate>Tue, 07 May 2024 10:53:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715079157225/fd7d6a23-a41a-4910-84de-967457d99f19.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715079066163/729111be-d479-4c8e-8174-0b0ff81354ee.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-docker-commands-cheat-sheet"><strong>Docker Commands Cheat Sheet</strong></h3>
<h4 id="heading-container-lifecycle"><strong>Container Lifecycle</strong></h4>
<ul>
<li><p><strong>Creating and Starting a Container</strong></p>
<ul>
<li><code>docker run -d --name my_container nginx</code> - Use this command to create and start a container named <code>my_container</code> using the <code>nginx</code> image.</li>
</ul>
</li>
<li><p><strong>Starting, Stopping, and Restarting Containers</strong></p>
<ul>
<li><p><code>docker stop my_container</code> - Stops the specified container.</p>
</li>
<li><p><code>docker start my_container</code> - Starts a previously stopped container.</p>
</li>
<li><p><code>docker restart my_container</code> - Restarts a container.</p>
</li>
</ul>
</li>
<li><p><strong>Listing Containers</strong></p>
<ul>
<li><p><code>docker ps</code> - Lists all currently running containers.</p>
</li>
<li><p><code>docker ps -a</code> - Lists all containers, including those that are stopped.</p>
</li>
</ul>
</li>
</ul>
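<p>Putting those lifecycle commands together, a minimal walkthrough might look like this (the container name is just an example):</p>
<pre><code class="lang-plaintext"># Create and start a container, check it, stop it, bring it back, remove it
docker run -d --name my_container nginx
docker ps                      # confirm it is running
docker stop my_container
docker ps -a                   # the stopped container is still listed
docker start my_container
docker rm -f my_container      # -f also removes a running container
</code></pre>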
<h4 id="heading-image-management"><strong>Image Management</strong></h4>
<ul>
<li><p><strong>Fetching an Image from a Registry</strong></p>
<ul>
<li><code>docker pull ubuntu</code> - Retrieves the <code>ubuntu</code> image from the Docker Hub or other specified registry.</li>
</ul>
</li>
<li><p><strong>Building an Image from a Dockerfile</strong></p>
<ul>
<li><code>docker build -t my_image .</code> - Builds an image named <code>my_image</code> from a Dockerfile in the current directory.</li>
</ul>
</li>
<li><p><strong>Listing and Removing Images</strong></p>
<ul>
<li><p><code>docker images</code> - Lists all local Docker images.</p>
</li>
<li><p><code>docker rmi my_image</code> - Removes the specified image from the local system.</p>
</li>
</ul>
</li>
</ul>
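<p>As a quick sketch of how these image commands chain together in practice (the image tag is illustrative):</p>
<pre><code class="lang-plaintext"># Pull a specific tag instead of the default "latest"
docker pull ubuntu:22.04
docker images                  # list what is now available locally
docker rmi ubuntu:22.04        # remove it once no container uses it
docker image prune             # clean up dangling (untagged) images
</code></pre>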
<h4 id="heading-container-operations"><strong>Container Operations</strong></h4>
<ul>
<li><p><strong>Interacting with Containers</strong></p>
<ul>
<li><p><code>docker exec -it my_container bash</code> - Executes a command (<code>bash</code> in this case) inside a running container.</p>
</li>
<li><p><code>docker attach my_container</code> - Attaches to a running container for interaction.</p>
</li>
<li><p><code>docker logs my_container</code> - Displays logs from the specified container.</p>
</li>
</ul>
</li>
<li><p><strong>Managing Container Resources</strong></p>
<ul>
<li><p><code>docker cp file.txt my_container:/path/to/destination</code> - Copies files between a container and the host system.</p>
</li>
<li><p><code>docker pause my_container</code> and <code>docker unpause my_container</code> - Pauses or unpauses a running container.</p>
</li>
<li><p><code>docker inspect my_container</code> - Displays detailed information about a specific container.</p>
</li>
</ul>
</li>
</ul>
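<p>For example, combining these operations on a running container (the paths shown are typical nginx locations and may differ in your image):</p>
<pre><code class="lang-plaintext"># Run a one-off command without opening a shell
docker exec my_container ls /etc/nginx

# Follow the log stream (Ctrl+C stops following, not the container)
docker logs -f my_container

# Copy a file out of the container to the host
docker cp my_container:/etc/nginx/nginx.conf ./nginx.conf
</code></pre>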
<h4 id="heading-networking"><strong>Networking</strong></h4>
<ul>
<li><p><strong>Network Management</strong></p>
<ul>
<li><p><code>docker network ls</code> - Lists all networks available on the Docker host.</p>
</li>
<li><p><code>docker network create my_network</code> - Creates a new network.</p>
</li>
<li><p><code>docker network connect my_network my_container</code> and <code>docker network disconnect my_network my_container</code> - Connects or disconnects a container from a network.</p>
</li>
</ul>
</li>
</ul>
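<p>A useful property of user-defined networks is built-in DNS: containers on the same network can reach each other by name. A small sketch (names are illustrative):</p>
<pre><code class="lang-plaintext">docker network create my_network
docker run -d --name web --network my_network nginx

# From another container on the same network, "web" resolves by name
docker run --rm --network my_network alpine ping -c 1 web
</code></pre>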
<h4 id="heading-volume-management"><strong>Volume Management</strong></h4>
<ul>
<li><p><strong>Volume Commands</strong></p>
<ul>
<li><p><code>docker volume ls</code> - Lists all Docker volumes.</p>
</li>
<li><p><code>docker volume create my_volume</code> - Creates a new volume named <code>my_volume</code>.</p>
</li>
<li><p><code>docker volume rm my_volume</code> - Removes the specified volume.</p>
</li>
<li><p><code>docker volume inspect my_volume</code> - Provides detailed information about a specific volume.</p>
</li>
</ul>
</li>
</ul>
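<p>To see a volume in action, mount it into a container at run time (the mount path below is nginx's default content directory and is just an example):</p>
<pre><code class="lang-plaintext">docker volume create my_volume
docker run -d --name my_container -v my_volume:/usr/share/nginx/html nginx
docker volume inspect my_volume   # shows the volume's mountpoint on the host

# Data in my_volume survives container removal
docker rm -f my_container
</code></pre>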
<p>Follow <a class="user-mention" href="https://hashnode.com/@rohitg00">Rohit Ghumare</a> for more such an amazing content.</p>
<p>Check <a target="_blank" href="http://devopscommunity.in">DevOpscommunity.in</a> for more FREE resources.</p>
]]></content:encoded></item><item><title><![CDATA[WordPress deployment with configuration as a code with Configu open source project]]></title><description><![CDATA[Introduction
Configuration-as-Code(CaC) is a common and powerful buzzword in DevOps and Cloud Native. Rather than manually creating configurations from Graphical User Interface(GUI), It allows managing your system’s configuration settings by presenti...]]></description><link>https://blog.devopscommunity.in/wordpress-deployment-with-configuration-as-a-code-with-configu-open-source-project</link><guid isPermaLink="true">https://blog.devopscommunity.in/wordpress-deployment-with-configuration-as-a-code-with-configu-open-source-project</guid><category><![CDATA[Devops]]></category><category><![CDATA[development]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Rohit Ghumare]]></dc:creator><pubDate>Tue, 31 Oct 2023 07:16:57 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn-images-1.medium.com/max/1600/0*JHVGTFS4WZt-Zs5U" alt /></p>
<h3 id="heading-introduction">Introduction</h3>
<p>Configuration-as-Code (CaC) is a common and powerful buzzword in DevOps and Cloud Native. Rather than creating configurations manually through a Graphical User Interface (GUI), it lets you manage your system’s configuration settings by expressing them as text or code scripts. CaC is commonly used for tasks such as server setup, application configuration, and network settings.</p>
<p>Let’s simplify this! Imagine you have a mechanical robot at home and want to tell it what to do. What is the best way to instruct it? Instead of using spoken words or buttons, you can write down a set of instructions on a sheet of paper, such as: “Turn right, turn left, move forward ten steps.”</p>
<p>Similarly, configuration as code is like giving written instructions to your systems, except we write them as programs and code instead of handing them to a robot. The best part is that they’re reusable, automated, testable, and consistent. Furthermore, version control lets you switch from the current set of instructions to a previous or newer one.</p>
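<p>As a tiny illustration of the idea (the keys and values below are made up), the “sheet of paper” for a system might be a version-controlled file like this:</p>
<pre><code class="lang-plaintext"># app-config.env - instructions for the system, written down instead of clicked
DATABASE_HOST=db.example.com
DATABASE_PORT=3306
CACHE_ENABLED=true
LOG_LEVEL=info
</code></pre>
<p>Because it is plain text, it can be reviewed, diffed, tested, and rolled back like any other code.</p>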
<h3 id="heading-why-do-i-love-configuration-as-code">Why do I love Configuration as Code?</h3>
<ol>
<li><p><strong>Version Control:</strong> If you’re familiar with version control tools like Git, you can store your configuration settings as code in their repositories. This lets you track changes, collaborate with teams, and roll back to previous versions if something goes wrong, just as you manage code changes with Git.</p>
</li>
<li><p><strong>Automation:</strong> CaC saves a lot of time and reduces the chance of misconfiguration by automating the provisioning and configuration of servers, containers, or cloud resources through your configuration scripts.</p>
</li>
<li><p><strong>Testing:</strong> I hope you have written some unit tests for your code. Similarly, you can write tests for your configuration code to ensure it meets requirements, which helps catch issues early in the deployment process.</p>
</li>
<li><p><strong>Reusability:</strong> The “Write once, deploy anywhere” approach allows you to reuse your configuration code for different environments, simplifying your deployments.</p>
</li>
<li><p><strong>Scalability:</strong> As your company takes on new projects and infrastructure grows, manually managing configurations becomes hectic. CaC allows you to scale quickly by applying the same code to new instances, making it efficient to manage many resources.</p>
</li>
</ol>
<h3 id="heading-introducing-configu-orchestrator-and-configu-platform">Introducing Configu Orchestrator and Configu Platform</h3>
<p><a target="_blank" href="https://github.com/configu/configu/"><strong>Configu Orchestrator</strong></a> is an Open source software for streamlining, testing, and automating application configurations across environments, i.e., Configu Orchestrator is a standalone open-source tool for configuring applications.</p>
<ul>
<li><p>Configu offers “configuration-as-code”, which is a concept that allows developers to declare configurations as code, making them easier to manage, test, and automate.</p>
</li>
<li><p>Configu Orchestrator is an open-source tool designed to help manage application configurations across different environments. It allows developers to declare configurations as code (using <code>.cfgu.json</code> files), making them easier to handle, test, and automate. This is particularly useful in complex, distributed environments where configurations can be scattered across multiple sources.</p>
</li>
<li><p>With the emergence of dynamic and distributed microservices architectures, application configurations are becoming more fragmented. Configurations are usually stored as plain text and are scattered across various sources, such as multiple stores, databases, files, git repositories, and third-party tools.</p>
</li>
<li><p>It is not uncommon for a company to have multiple stores for configurations. The <a target="_blank" href="https://github.com/configu/configu"><strong>Configu Orchestrator</strong></a>, an open-source tool, is designed to address this challenge.</p>
</li>
</ul>
<blockquote>
<p><a target="_blank" href="http://configu.com"><strong>Configu platform</strong></a> allows users to manage configurations from development to production, including testing, security, and tracking.</p>
</blockquote>
<ul>
<li><p><strong>Configu Platform</strong> is a comprehensive solution for managing configurations throughout the software development lifecycle, from development to production. It also integrates with various tools and services, making it a one-stop solution for configuration management. The platform aims to enhance productivity, collaboration, security, and configuration control.</p>
</li>
<li><p>It also offers integration with various tools and services such as CI/CD, runtime environments, PaaS, and cloud service providers, making it a one-stop shop for all configuration needs.</p>
</li>
<li><p>Additionally, the platform promises to boost productivity, increase collaboration, remove bottlenecks, gain better visibility, advance security and compliance, ensure high reliability, and better control over access and permissions.</p>
</li>
<li><p>Configu is a useful tool for managing and collaborating on software configurations. It provides a flexible and adaptable process that’s important in today’s ever-changing tech landscape. It’s great to see a solution that can simplify things across different environments and systems.</p>
</li>
<li><p>Configu is a versatile system that streamlines integration with existing workflows and systems. It can be tailored to meet the unique requirements of different teams and projects.</p>
</li>
<li><p>The system boasts a robust abstraction layer that enhances teams’ capacity to execute secure and error-free configuration operations during the software development lifecycle.</p>
</li>
</ul>
<h3 id="heading-challenges-with-wordpress-deployment">Challenges with WordPress Deployment</h3>
<p>It’s important to understand that <strong>WordPress</strong> is a web application that requires a database to store its contents, user data, and other settings. Additionally, WordPress relies on PHP as a programming language to handle requests and generate HTML pages. Therefore, when setting up WordPress using Docker Compose, we must consider these dependencies and configure them properly.</p>
<p>Let’s say we have two different environments: staging and production, each with its own set of configuration files and services. The staging environment might have a simpler configuration file that allows us to easily test changes before deploying them to production. In contrast, the production environment may require stricter security measures and additional services like load balancing and firewall rules.</p>
<p>Furthermore, let’s assume that our WordPress installation has several plugins and themes, each with its own unique requirements for PHP versions, error reporting levels, and caching mechanisms. We may also have some legacy code that needs to run alongside modern frameworks like Laravel or Symfony. To complicate matters further, our team comprises developers who prefer different IDEs, debugging tools, and workflows.</p>
<p>Imagine we want to deploy this entire stack across multiple clouds, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Each cloud provider offers its own set of services and toolsets for configuring and managing resources. This means we must consider factors like instance types, network topology, storage solutions, and access control mechanisms specific to each cloud platform.</p>
<p>To tackle this level of complexity, we will utilize the open-source Configu project and its ability to set environment variables that define the various services and their associated configurations.</p>
<p>Give them a <a target="_blank" href="https://github.com/configu/configu/">star</a>(⭐️) or even contribute. (<strong>Hacktoberfest</strong> is a great time to start your OSS journey.)</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/configu/configu/">https://github.com/configu/configu/</a></div>
<p> </p>
<h3 id="heading-tutorial-demo-wordpress-deployment">Tutorial Demo — WordPress Deployment</h3>
<p>Let’s get our hands dirty by trying out Configu with a real-world use case. One popular use case among individual developers and companies is deployment of WordPress applications. WordPress was launched as a blog hosting and creation platform, but today it’s widely adopted as the home of websites and blogs.</p>
<p>We will use the power of the Configu open-source project for configuration as code to define secrets, variables, and necessary configurations, along with <code>docker-compose</code>, to deploy WordPress faster than ever. Before diving into the hands-on part, checking out the official <a target="_blank" href="https://configu.com/docs/get-started/">Getting Started</a> documentation provided by Configu is highly recommended.</p>
<blockquote>
<p>💡 Note: We’re using macOS for this demo; if you’re on a different system, install the equivalent dependencies.</p>
</blockquote>
<h3 id="heading-pre-requisites">Pre-requisites</h3>
<ul>
<li><p><a target="_blank" href="http://app.configu.com"><strong>Configu Account</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.notion.so/Configuration-as-a-code-with-Configu-open-source-project-dd2042e4f82249668900a58e989d56dc?pvs=21"><strong>Docker</strong></a></p>
</li>
<li><p><a target="_blank" href="https://docs.docker.com/compose/install/"><strong>Docker Compose</strong></a></p>
</li>
</ul>
<h3 id="heading-setting-up-the-configu-cli-on-your-local-system">Setting Up the Configu CLI on your local system</h3>
<ul>
<li><strong>Install the Configu CLI (OSS)</strong></li>
</ul>
<pre><code class="lang-plaintext">curl &lt;https://cli.configu.com/install.sh&gt; | sh
</code></pre>
<blockquote>
<p>💡 You can use the configu orchestrator open-source with any config store you need. We’ll use the Configu Platform in this example because I love it.</p>
</blockquote>
<p><strong>Login to Configu Platform</strong></p>
<pre><code class="lang-plaintext">configu login
</code></pre>
<p><strong>Expected Output:</strong></p>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*PMVX8OGFYMOYQtyk" alt /></p>
<p>You must approve the authentication from your browser; for that, log in using your GitHub, Google, or another supported account. Once you receive the below output, you’re good to go.</p>
<blockquote>
<p>💡 Configu has an <a target="_blank" href="https://configu.com/docs/cli-setup/#install-on-cicd">interactive login</a> if needed (for CI/CD processes or other non-interactive users, you can also log in using a token if required)</p>
</blockquote>
<p><img src="https://cdn-images-1.medium.com/max/1200/0*PKZfdckfZymScyZL" alt /></p>
<p><img src="https://cdn-images-1.medium.com/max/1200/0*qEmgO82-BGX3CU7y" alt /></p>
<h3 id="heading-wordpress-installation-using-docker-compose">WordPress Installation using Docker Compose</h3>
<ul>
<li>Verify that you have already installed <code>docker-compose</code> on your system. If yes, run the commands below to check the version and make the binary executable:</li>
</ul>
<pre><code class="lang-plaintext">% docker-compose --version
Docker Compose version v2.19.1
% sudo chmod +x /usr/local/bin/docker-compose
</code></pre>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*8MeD_0mAjxgqEIv1" alt /></p>
<ul>
<li>Set up a project directory to organize your WordPress installation and related files. Navigate to your desired project location in the terminal. Create a new directory with the command:</li>
</ul>
<pre><code class="lang-plaintext">% mkdir wordpress-project
% cd wordpress-project
</code></pre>
<ul>
<li><p>We will now create a <code>docker-compose.yaml</code> file for the WordPress Installation using Docker Compose. This file will contain the necessary service and configuration required for WordPress Installation; hence, define them using your favorite code editor.</p>
</li>
<li><p>You can use the below file for this demo. This file contains WordPress image and MySQL image as a database and environment variables that we will define using the configu open source project.</p>
</li>
</ul>
<p><strong>Without Configu:</strong></p>
<pre><code class="lang-plaintext">version: '3'
services:
  db:
    image: mysql:8
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - 8000:80
    restart: always
    environment:
      WORDPRESS_DB_HOST: ${WORDPRESS_DB_HOST}
      WORDPRESS_DB_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
volumes:
  db_data: {}
</code></pre>
<p><strong>With Configu:</strong></p>
<pre><code class="lang-plaintext">version: '3'
services:
  db:
    image: mysql:8
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    # Configu removes the need for these environment variables in the file:
    #   MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - 8000:80
    restart: always
    # Configu removes the need for these environment variables in the file:
    #   WORDPRESS_DB_HOST, WORDPRESS_DB_USER, WORDPRESS_DB_PASSWORD
volumes:
  db_data: {}
</code></pre>
<ul>
<li><p>As you may have noticed, we’re using variables like <code>${WORDPRESS_DB_HOST}</code> in the environment, so we need to define their values. Hence, we’re going to create a <code>cfgu.json</code> file to define them.</p>
</li>
<li><p>I have created a <code>file.cfgu.json</code> file, shared below, for our WordPress installation. It describes our software and serves as our configuration declaration.</p>
</li>
</ul>
<pre><code class="lang-plaintext">{
    "MYSQL_ROOT_PASSWORD": {
      "type": "RegEx",
      "pattern": "/^(?=.*[\\!\\@\\#\\$\\%\\^\\&amp;\\*])[A-Za-z0-9\\!\\@\\#\\$\\%\\^\\&amp;\\*]+$/",
      "required": "true",
      "description": "Password for the MySQL Database"
    },
    "MYSQL_DATABASE": {
      "type": "String",
      "default": "wordpress"
    },
    "MYSQL_USER": {
      "type": "String",
      "default": "wordpress"
    },
    "MYSQL_PASSWORD": {
      "type": "RegEx",
      "pattern": "/^(?=.*[\\!\\@\\#\\$\\%\\^\\&amp;\\*])[A-Za-z0-9\\!\\@\\#\\$\\%\\^\\&amp;\\*]+$/",
      "required": "true",
      "description": "Password for the MySQL Database"
      },
    "WORDPRESS_DB_HOST": {
      "type": "String",
      "default": "db:3306"
    },
    "WORDPRESS_DB_USER": {
      "type": "String",
      "default": "wordpress"
    },
    "WORDPRESS_DB_PASSWORD": {
      "type": "String",
      "default": "password"
    }
  }
</code></pre>
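<p>For example, under the password pattern above, a value like <code>P@ssw0rd!</code> would pass validation, while <code>password</code> would be rejected, because the lookahead requires at least one special character from <code>!@#$%^&amp;*</code>.</p>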
<ul>
<li>Once you’re done creating the JSON file, Run the below command to verify and export the values.</li>
</ul>
<pre><code class="lang-plaintext">Rohit@Rohits-MacBook-Pro wordpress-project % configu eval \\
 --schema './file.cfgu.json' \\
| configu export \\
--format 'JSON' \\
</code></pre>
<h3 id="heading-upsert-values">Upsert values</h3>
<ul>
<li><p>If you want to make any changes to the configuration in the <code>.cfgu.json</code> file, use <code>configu upsert</code> and set a path to assign configuration values to those variables.</p>
</li>
<li><p>In our case, I want to add <code>WORDPRESS_2FA_SECRET</code> and <code>WORDPRESS_2FA_PROVIDER</code> variables. This will allow us to authenticate WordPress using two-factor authentication by generating and validating 2FA tokens.</p>
</li>
</ul>
<p>For <strong>Example</strong>:</p>
<pre><code class="lang-plaintext">configu upsert --store "configu" --set "prod" --schema "./file.cfgu.json" \\
    -c "WORDPRESS_2FA_SECRET=ROHITHELLO" \\ -c "WORDPRESS_2FA_PROVIDER=None" \\
</code></pre>
<p>Run the below command to verify and export the values to the newly created <code>configu</code> store.</p>
<pre><code class="lang-plaintext">Rohit@Rohits-MacBook-Pro wordpress-project % configu eval \\
--store 'configu' --schema './file.cfgu.json' \\
| configu export \\
--format 'JSON' \\
</code></pre>
<p><strong>Expected Output:</strong></p>
<pre><code class="lang-plaintext">{
  "MYSQL_ROOT_PASSWORD": "admin",
  "MYSQL_DATABASE": "wordpress",
  "MYSQL_USER": "wordpress",
  "MYSQL_PASSWORD": "password",
  "WORDPRESS_DB_HOST": "db:3306",
  "WORDPRESS_DB_USER": "wordpress",
  "WORDPRESS_DB_PASSWORD": "password",
  "WORDPRESS_2FA_SECRET": "ROHITHELLO",
  "WORDPRESS_2FA_PROVIDER": "None"
}
</code></pre>
<ul>
<li><p>If you get the expected output, you have successfully configured the Configu schema. Now, let’s complete our installation with the last step using Configu. Running a local environment has never been easier; run Configu seamlessly with your <code>docker-compose</code> by including the default commands.</p>
</li>
<li><p>Run the below command for local deployment:</p>
</li>
</ul>
<pre><code class="lang-plaintext">configu eval --schema "./file.cfgu.json" | configu export --run "docker compose up -d"
</code></pre>
<ul>
<li>Run the below command for a production (<code>prod</code>) deployment</li>
</ul>
<pre><code class="lang-plaintext">configu eval —store “configu” —set “prod” --schema "./file.cfgu.json" | configu export --run "docker compose up -d”
</code></pre>
<p>You should expect the output below once the images have finished downloading:</p>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*yasNzs26yP7I0Cf7" alt /></p>
<ul>
<li>Run “<code>docker ps</code>” to check whether your WordPress and MySQL containers are running smoothly!</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*8X85cEibQz1hmXqP" alt /></p>
<ul>
<li>Use the WordPress container port mapping [<code>0.0.0.0:8000</code>-&gt;80/tcp], i.e., open <code>localhost:8000</code> in your browser, to access the WordPress installation.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*2p5DzATRDIS1xDxw" alt /></p>
<ul>
<li>Input the necessary information to create a website.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*Q4oTWJjXbJwFSdpH" alt /></p>
<ul>
<li>If you received the below output, congratulations! Your account is ready to log in.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*dUNHfjabKydVz-y1" alt /></p>
<ul>
<li>Enter the account details you just created in <code>Log In</code> for your Configu Demo project.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*n3M6fMVaLGJdMhFM" alt /></p>
<ul>
<li>Hurray! Your WordPress is Up and Running… Feel free to create a sample blog and publish it.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*qv5E0IFeourzq1kD" alt /></p>
<ul>
<li>I hope you enjoyed this tutorial and were amazed by the capabilities of the Configu open-source project for configuration as code. Check the sample blog shared below and create your own.</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*p-yVnl68suHnyt-n" alt /></p>
<h3 id="heading-video-tutorial">Video Tutorial</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.loom.com/share/0ebdb6ba045b478dbe883ded97034af1?sid=c01820f4-bc0b-41e7-94be-1181618b5537">https://www.loom.com/share/0ebdb6ba045b478dbe883ded97034af1?sid=c01820f4-bc0b-41e7-94be-1181618b5537</a></div>
<p> </p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Today’s tutorial was a success: we deployed WordPress efficiently using Docker Compose. This step-by-step guide showed how to install WordPress effortlessly while configuring its details using the Configu open-source project. Following this method, we ensured secure and expedited installations and deployments from the CLI, making the process much simpler than it used to be. Overall, this tutorial gave us invaluable knowledge on effectively using Docker Compose and Configu for WordPress installations. I enjoyed working with the Configu open-source project, as it removed the pain of managing secrets, variables, keys, and configuration files, letting me focus on my workloads and deployments rather than unnecessary struggles.</p>
<p>If you also liked it, give them a <a target="_blank" href="https://github.com/configu/configu/">star</a>(⭐️) or even contribute. (<strong>Hacktoberfest</strong> is a great time to start your OSS journey.)</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/configu/configu/">https://github.com/configu/configu/</a></div>
<p> </p>
<h3 id="heading-configuration-as-code-with-configu">Configuration-as-Code with Configu</h3>
<p>Configu is a configuration management platform composed of two main components:</p>
<h4 id="heading-configu-orchestrator">Configu Orchestrator</h4>
<p>As applications become more dynamic and distributed in microservices architectures, configurations are getting more fragmented. They are saved as raw text that is spread across multiple stores, databases, files, git repositories, and third-party tools (a typical company will have five to ten different stores). The <a target="_blank" href="https://github.com/configu/configu"><strong>Configu Orchestrator</strong></a>, an open-source software, is a powerful standalone tool designed to address this challenge by providing a unified interface and the Configuration-as-Code (CaC) approach.</p>
<h4 id="heading-configu-cloud-platform">Configu Cloud Platform</h4>
<p><a target="_blank" href="https://app.configu.com/"><strong>Configu Cloud</strong></a> is the most innovative store purpose-built for configurations, including environment variables, secrets, and feature flags. It is built based on the Configu configuration-as-code (CaC) approach and can model configurations and wrap them with unique layers, providing collaboration capabilities, visibility into configuration workflows, and security and compliance standardization. Unlike legacy tools, which treat configurations as unstructured data or key-value pairs, Configu leads the way with a Configuration-as-Code approach. They are treated as first-class citizens in the developers’ code by modeling configurations. This makes our solution more robust and reliable and also enables Configu to provide more capabilities, such as visualization, a testing framework, and security abilities.</p>
<blockquote>
<p>Feel free to reach out to me anytime on 👉 <a target="_blank" href="http://devopsguys.in">Rohit Ghumare</a></p>
</blockquote>
<p><strong>Subscribe to my monthly newsletter here:</strong></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="http://thatdevopsguy.substack.com">http://thatdevopsguy.substack.com</a></div>
<p> </p>
<blockquote>
<p>❤️ Follow <a target="_blank" href="https://medium.com/u/d56f7e24e79e">Rohit Ghumare</a> for more such awesome blogs</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[A Guide to Hacktoberfest 2023: Your Path to Open-Source Contribution]]></title><description><![CDATA[Being an open-source promoter, I have been actively contributing to open-source, which helped me land my first remote global job. You can do it too, but never aim for open-source as a road to Remote Jobs; if you want to make an impact in society free...]]></description><link>https://blog.devopscommunity.in/a-guide-to-hacktoberfest-2023-your-path-to-open-source-contribution</link><guid isPermaLink="true">https://blog.devopscommunity.in/a-guide-to-hacktoberfest-2023-your-path-to-open-source-contribution</guid><category><![CDATA[#hacktoberfest ]]></category><category><![CDATA[Hacktoberfest2023]]></category><category><![CDATA[hacking]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[opensource]]></category><dc:creator><![CDATA[Rohit Ghumare]]></dc:creator><pubDate>Mon, 02 Oct 2023 16:45:49 GMT</pubDate><content:encoded><![CDATA[<p>Being an open-source promoter, I have been actively contributing to open-source, which helped me land my first <a target="_blank" href="https://ghumare64.medium.com/remote-job-roadmap-276c2d61c0cc">remote global job</a>. You can do it too, but never treat open source merely as a road to remote jobs; if you want to make an impact on society freely, this is the way, and Hacktoberfest is the festival for open-source contributors.</p>
<p><img src="https://cdn-images-1.medium.com/max/1600/1*VByQKzw5qxoaryqGe9a2nw.png" alt /></p>
<h4 id="heading-introduction"><strong>Introduction</strong></h4>
<p>Welcome to <a target="_blank" href="https://hacktoberfest.com/">Hacktoberfest 2023</a>, the celebration of open-source software where developers worldwide come together to contribute, collaborate, and learn. Whether you’re a seasoned developer or just starting your coding journey, this guide will walk you through the steps to make meaningful contributions during this event.</p>
<blockquote>
<p><strong><em>Hacktoberfest has grown from 676 participants in 2014 to nearly 147,000 participants last year.</em></strong></p>
</blockquote>
<h4 id="heading-prepare-your-toolkit"><strong>Prepare Your Toolkit</strong></h4>
<p>To embark on your Hacktoberfest adventure, make sure your development toolkit is in top shape. Install essential tools like Git for version control, choose a reliable code editor, and ensure you have installed the necessary programming languages or frameworks on your system. Having a well-equipped setup will empower you to dive into open-source projects seamlessly.</p>
<h4 id="heading-choose-your-domain"><strong>Choose Your Domain</strong></h4>
<p>Selecting a programming language or framework that resonates with your interests and expertise is crucial. It's your decision: whether you prefer the versatility of JavaScript, the simplicity of Python, the robustness of Java, or specialized frameworks like React or Flask, align your choice with your passion. This alignment will make your contributions more enjoyable and impactful, so you never get bored or feel pressured on your open-source journey.</p>
<h4 id="heading-learn-version-control-basics"><strong>Learn Version Control Basics</strong></h4>
<p>Understanding the fundamentals of Git is paramount in the world of open-source collaboration. Familiarize yourself with Git concepts such as forking repositories, creating branches, committing changes, and submitting pull requests. Numerous <a target="_blank" href="https://x.com/ghumare64/status/1702206015582666881?s=20">online tutorials</a> are available to help you grasp these essential version control skills, ensuring you can confidently participate in collaborative coding efforts.</p>
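<p>As a reference point, a typical fork-and-pull-request flow looks roughly like this (the repository URL and branch name are placeholders):</p>
<pre><code class="lang-plaintext"># Clone your fork, work on a branch, and push it back
git clone https://github.com/&lt;your-username&gt;/some-project.git
cd some-project
git checkout -b fix-readme-typo
# ...edit files...
git add README.md
git commit -m "Fix typo in README"
git push origin fix-readme-typo
# Then open a pull request from your fork on GitHub
</code></pre>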
<p><a target="_blank" href="https://github.com/MattIPv4/hacktoberfest-data/blob/master/generated/users_engaged_linked_providers_bar.png"><img src="https://cdn-images-1.medium.com/max/1600/0*K2vUzuob6d2zb0W7" alt /></a></p>
<h4 id="heading-explore-open-source-platforms"><strong>Explore Open Source Platforms</strong></h4>
<p>Dive into the open-source ecosystem by exploring platforms like GitHub, GitLab, and Bitbucket. Look for repositories tagged with ‘<code>Hacktoberfest</code>’ or seek out issues labeled ‘<a target="_blank" href="https://hacktoberfest.com/about/">hacktoberfest’</a>. These platforms host a plethora of projects actively seeking contributions, providing you with a wide array of options to find a project that aligns with your interests and skills.</p>
<h4 id="heading-contribute-to-existing-projects"><strong>Contribute to Existing Projects</strong></h4>
<p>Start your contribution journey by tackling tasks labeled as ‘beginner-friendly’ or ‘good first issue’. These issues are typically well-defined and serve as an excellent starting point for newcomers. Fork a repository, implement the necessary changes, and submit a pull request. Always adhere to the project’s contributing guidelines and engage with the project maintainers and fellow contributors. Collaboration and communication are key to successful open-source contributions.</p>
<p><a target="_blank" href="https://github.com/MattIPv4/hacktoberfest-data/blob/master/generated/users_registrations_top_countries_bar_excl.png"><img src="https://cdn-images-1.medium.com/max/1600/0*Ak4uYv85NYHPUsfm" alt /></a></p>
<h4 id="heading-create-your-own-project"><strong>Create Your Own Project</strong></h4>
<p>If you have a unique project idea or a problem you’re passionate about solving, consider creating your own open-source project. Establish a repository on platforms like GitHub and invite others to collaborate. Cultivate an inclusive and welcoming environment where contributors can actively participate, learn, and grow together. Your project can be a valuable addition to the open-source community.</p>
<h4 id="heading-collaborate-and-communicate"><strong>Collaborate and Communicate</strong></h4>
<p>Active participation in open-source communities is invaluable. Join online forums, mailing lists, or social media groups related to your chosen technology. Don’t hesitate to ask questions and seek guidance when faced with challenges. The open-source community is renowned for its supportive atmosphere, where experienced developers are often more than willing to assist newcomers.</p>
<h4 id="heading-prioritize-quality-over-quantity"><strong>Prioritize Quality Over Quantity</strong></h4>
<p>When making contributions, emphasize quality over quantity. A well-thought-out bug fix, feature addition, or documentation improvement is more valuable than multiple rushed submissions. Take the time to understand the project’s context and ensure your contributions align with the overall goals and standards of the project.</p>
<h4 id="heading-review-and-improve"><strong>Review and Improve</strong></h4>
<p>Be open to feedback on your contributions. Reviewers might provide suggestions for improvements or changes. Embrace this feedback as a learning opportunity. Use it to enhance your skills, refine your code, and better understand the collaborative nature of open-source development. Iterative improvement is a cornerstone of successful contributions.</p>
<h4 id="heading-celebrate-your-achievements"><strong>Celebrate Your Achievements</strong></h4>
<p>Once your contributions are accepted and merged into the project, take a moment to celebrate your achievements. You’ve positively impacted the open-source community, contributing your expertise and passion to a shared project. Your efforts have not only improved the project but also enriched your own skills and knowledge. Acknowledge your hard work and the positive difference you’ve made.</p>
<h4 id="heading-participate-in-local-events"><strong>Participate in Local Events</strong></h4>
<p>Consider attending local Hacktoberfest meetups, workshops, or coding events if they are available in your area. These events provide valuable opportunities to meet fellow developers, network, and engage in collaborative coding sessions. Participating in local events can enhance your sense of community and expand your network within the developer community.</p>
<h4 id="heading-practice-ethical-and-respectful-behavior"><strong>Practice Ethical and Respectful Behavior</strong></h4>
<p>Lastly, always adhere to the code of conduct of the projects and communities you contribute to. Treat all contributors with respect, kindness, and professionalism. Open-source collaboration is built on mutual respect and inclusivity. By practicing ethical and respectful behavior, you contribute to creating a positive and supportive environment for everyone involved.</p>
<h3 id="heading-contribute-to-configu-at-hacktoberfest">Contribute to Configu at Hacktoberfest</h3>
<p>As we celebrate Hacktoberfest, <a target="_blank" href="https://github.com/configu/configu/"><strong>Configu</strong></a> invites you to be part of its mission to redefine software configuration management. Configu set out to tackle the persistent challenge of configuration chaos, and we’re making strides every day.</p>
<p>If you’re searching for a place to make a significant impact this Hacktoberfest, consider Configu. Delve into our open-source repository, understand our vision, and contribute to shaping our journey. If you’re unsure where to begin or need help, our <a target="_blank" href="https://discord.gg/cjSBxnB9z8"><strong>Configu Discord community</strong></a> is always here to guide you. For newcomers, I recommend starting with issues labeled ‘<a target="_blank" href="https://github.com/configu/configu/labels/good%20first%20issue"><strong>good-first-issues</strong></a>‘.</p>
<h4 id="heading-conclusion"><strong>Conclusion</strong></h4>
<p>As you embark on your Hacktoberfest 2023 journey, remember that this event is not just about the tangible rewards but also about the knowledge gained, the friendships formed, and the positive impact you make on the global open-source community. By following these steps, you are well-equipped to contribute meaningfully, learn profoundly, and celebrate your achievements. Happy coding and happy contributing! Here’s to a successful and fulfilling Hacktoberfest 2023!</p>
<hr />
<h3 id="heading-important-resources-for-open-source-contributions">Important Resources for Open Source Contributions</h3>
<ul>
<li><p><a target="_blank" href="https://hacktoberfest.com/">Hacktoberfest’s Website</a></p>
</li>
<li><p><a target="_blank" href="https://docs.github.com/en/get-started/quickstart/contributing-to-projects">How to contribute to Open-Source by GitHub</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/devteam/hacktoberfest-2023-sneak-peek-at-badges-whats-to-come-4h22">DEV’s Blog on Hacktoberfest</a>.</p>
</li>
<li><p><a target="_blank" href="https://hacktoberfest.com/participation/">How to get started with Hacktoberfest</a>.</p>
</li>
</ul>
<p><em>Remember, every contribution counts! Have a fantastic</em> <strong><em>Hacktoberfest 2023</em></strong>!</p>
<p><strong>Subscribe to my monthly newsletter here:</strong></p>
<p><a target="_blank" href="https://thatdevopsguy.substack.com/"><strong>DevOps Advocate Newsletter</strong></a></p>
<blockquote>
<p>❤️ Join <a target="_blank" href="http://blog.devopscommunity.in">DevOps Community</a> for more such awesome blogs.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Real Time aws log analytics project using native services]]></title><description><![CDATA[Welcome to the Wiki! 😊

📝Table of Contents:

Introduction to Log Analytics

Understanding AWS Native Services

Setting up AWS CloudWatch Logs

Configuring AWS CloudTrail for Log Collection

Analyzing Logs with Amazon Athena

Creating Real-time Dash...]]></description><link>https://blog.devopscommunity.in/real-time-aws-log-analytics-project-using-native-services</link><guid isPermaLink="true">https://blog.devopscommunity.in/real-time-aws-log-analytics-project-using-native-services</guid><category><![CDATA[AWS]]></category><category><![CDATA[projects]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Deactivated User]]></dc:creator><pubDate>Thu, 07 Sep 2023 18:14:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685385584496/b17debf5-03a2-42c8-90cd-e21ff4fba101.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-welcome-to-the-wiki">Welcome to the Wiki! 😊</h2>
<p><img src="https://github.com/Radiant690/RealtimeAWSLogAnalyzer/assets/95444132/6386b9fb-363f-464b-8411-097ac167da9e" alt="WorkingBusyGIF" /></p>
<h3 id="heading-table-of-contents">📝Table of Contents:</h3>
<ol>
<li><p>Introduction to Log Analytics</p>
</li>
<li><p>Understanding AWS Native Services</p>
</li>
<li><p>Setting up AWS CloudWatch Logs</p>
</li>
<li><p>Configuring AWS CloudTrail for Log Collection</p>
</li>
<li><p>Analyzing Logs with Amazon Athena</p>
</li>
<li><p>Creating Real-time Dashboards with Amazon CloudWatch Logs Insights</p>
</li>
<li><p>Building Custom Log Monitoring Solutions with Amazon Kinesis Data Firehose</p>
</li>
<li><p>Implementing Log Analysis with Amazon Elasticsearch Service</p>
</li>
<li><p>Utilizing AWS Glue for Log Data ETL Processes</p>
</li>
<li><p>Applying Machine Learning to Log Analytics with Amazon SageMaker</p>
</li>
<li><p>Securing Log Data with AWS Identity and Access Management</p>
</li>
<li><p>Integrating Log Analytics with AWS Lambda</p>
</li>
<li><p>Scaling Log Analytics with Amazon Redshift</p>
</li>
<li><p>Monitoring and Alerting Strategies for Log Analytics</p>
</li>
<li><p>Best Practices for Real-time AWS Log Analytics</p>
</li>
</ol>
<h3 id="heading-motivation">📝 Motivation:</h3>
<p>"Things work well when understood well".</p>
<p>Personally projects are half a mile properly dealt with source code alone. To assist a document answering the queries like why, what, how is the key.</p>
<h3 id="heading-introduction">📝 Introduction:</h3>
<p>➡️ Welcome to the fascinating world of "Real Time AWS Log Analytics: Harnessing Native Services"! This project walks you through one of the most important and in-demand tasks, log analytics, and shows how to use AWS native services to perform it in real time.</p>
<p>➡️ In today's data-driven landscape, organizations generate massive volumes of log data from various sources such as applications, servers, network devices, and security systems. The ability to analyze this data in real time is crucial for detecting anomalies, identifying security threats, and gaining valuable insights to optimize operations.</p>
<p>➡️ The documentation provided (GitHub Wiki) will assist you throughout the project with detailed explanations and workflows. Inside, you'll discover a roadmap to leverage AWS native services to build scalable and efficient log analytics solutions. We'll take you step-by-step through each chapter, providing clear explanations, practical examples, and best practices to ensure you gain a deep understanding of the concepts and techniques.</p>
<p>➡️ But this is not your typical technical guide. We've crafted this to be human-friendly and engaging, making your learning experience enjoyable. Along the way, you'll find vibrant visuals, helpful diagrams, and even some fun emojis to keep you motivated and excited about the possibilities.</p>
<p>➡️ So, whether you're sipping your favorite coffee or curling up in a cozy corner, get ready to dive into the exciting world of real-time log analytics using AWS native services. Unleash the power of your log data, uncover hidden insights, and pave the way for your organization's success. Remember, the sky's the limit when it comes to harnessing the potential of AWS log analytics!</p>
<p>Let's begin this incredible journey together! 🚀</p>
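<p>As a small taste of what the CloudWatch Logs Insights chapter builds toward, here is the kind of query you can run from the CLI (the log group name is a placeholder):</p>
<pre><code class="lang-plaintext"># Find the 20 most recent ERROR lines from the last hour
aws logs start-query \
  --log-group-name "/aws/lambda/my-function" \
  --start-time $(($(date +%s) - 3600)) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20'
# Poll for results with: aws logs get-query-results --query-id &lt;returned-id&gt;
</code></pre>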
]]></content:encoded></item><item><title><![CDATA[Deploy a Node.js App using Jenkins on AWS EC2 instances]]></title><description><![CDATA[Hello Everyone 👋 In this blog, we are going to deploy a simple Node.js project built on an EC2 instance using Jenkins.

What are we going to learn from this Project?

Set up Jenkins on AWS EC2 instance.

Create a security group for Jenkins.

Connect...]]></description><link>https://blog.devopscommunity.in/deploy-a-nodejs-app-using-jenkins-on-aws-ec2-instances</link><guid isPermaLink="true">https://blog.devopscommunity.in/deploy-a-nodejs-app-using-jenkins-on-aws-ec2-instances</guid><category><![CDATA[BlogsWithCC]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Jenkins]]></category><dc:creator><![CDATA[Yash Pimple]]></dc:creator><pubDate>Wed, 08 Mar 2023 12:58:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677999485705/dfac187f-3de2-4dc0-8c40-f7342b9cfeba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Everyone 👋 In this blog, we are going to deploy a simple Node.js project built on an EC2 instance using Jenkins.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677768568105/4dfd6dde-1fc7-4eea-b83a-5af63cf45e19.png" alt class="image--center mx-auto" /></p>
<p>What are we going to learn from this Project?</p>
<ul>
<li><p>Set up Jenkins on AWS EC2 instance.</p>
</li>
<li><p>Create a security group for Jenkins.</p>
</li>
<li><p>Connect to Jenkins instance using EC2 security groups</p>
</li>
<li><p>Install Jenkins on the EC2 instance</p>
</li>
<li><p>Create a Dockerfile</p>
</li>
<li><p>Manage some Jenkins plugins and learn how webhooks work</p>
</li>
<li><p>Create a job for automating CI/CD deployment</p>
</li>
<li><p>Integrate Jenkins and GitHub</p>
</li>
</ul>
<p>Here's the Workflow for our Project ⬇️</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677768903574/70ecb317-2b10-4b18-9944-c45333f7612f.png" alt class="image--center mx-auto" /></p>
<p>Github Repo: <a target="_blank" href="https://github.com/YashPimple/Jenkins-CI-CD-with-GitHub-Integration">Repository</a></p>
<p>Note: Before getting started with the project you need to have an AWS Free Tier account. If you are setting up your AWS account for the first time, you can check out this <a target="_blank" href="https://youtu.be/FRQ9fE4fd5g">video</a>.</p>
<p>So let's get started</p>
<h3 id="heading-set-up-jenkins-on-aws-ec2-instance">Set up Jenkins on AWS EC2 instance</h3>
<p>Let's start by creating an EC2 instance and running Jenkins on it.</p>
<ul>
<li><p>Search for EC2 in the Search bar on the top and click on it</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924454248/0980395f-65a3-44c6-ac76-36d984187067.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now let's launch an instance by clicking on '<strong>Launch instances</strong>'</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924582215/670103f4-691e-497b-961e-8f691432b609.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now, enter some details for your instance and name it 'Jenkins-master'</p>
</li>
<li><p>I am opting for Ubuntu Server (free tier) here.</p>
</li>
<li><p>Make sure that the server you are opting for lies in the free tier</p>
</li>
<li><p>Choose the size of the instance "<strong>t2.micro</strong>" or anything that lies in the free tier.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924482187/c5875f5a-21dd-45e4-a34a-3517177075f8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>And now let's create a key pair for your instance: name it <code>aws-jenkins</code> and choose "<code>.pem</code>" if you are going to use SSH to connect to your instance, or "<code>.ppk</code>" if you are using PuTTY. Download it and launch the instance (a CLI alternative is sketched after this list).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924504951/ef4d0c1c-57d2-44d7-ac82-c54206afb33f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>And Wait for the Status Check to complete (Running)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924745284/16b203ca-23e7-4d60-b7a1-6eeca639250d.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
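<p>If you prefer the command line, the console launch above can also be done with the AWS CLI. This is only a minimal sketch, not part of the original walkthrough: the AMI ID below is a placeholder you must replace with a current Ubuntu AMI for your region, and it assumes the <code>aws-jenkins</code> key pair already exists.</p>
<pre><code class="lang-bash"># Launch a t2.micro Ubuntu instance named Jenkins-master.
# ami-xxxxxxxxxxxxxxxxx is a placeholder; look up a current Ubuntu AMI for your region.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro \
  --key-name aws-jenkins \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Jenkins-master}]'
</code></pre>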
<h3 id="heading-now-it-is-time-to-connect-of-instanceserver">Now it is time to Connect of instance(Server)</h3>
<ul>
<li><p>Click on "Connect" and select the "EC2 Instance Connect" option (an SSH alternative is sketched after this list).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677925085276/5780bf49-33ea-4412-9680-2ae71feea3e1.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677924908150/30610fee-7b45-44e0-8e08-9def8e225634.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Congratulations, you have now connected to the instance</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677925223150/cabb96eb-df1f-4aaf-988a-78b11cdc55cb.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ul>
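<p>Alternatively, you can connect over SSH using the key pair downloaded earlier. A minimal sketch, assuming the key file is <code>aws-jenkins.pem</code> in your current directory and the default <code>ubuntu</code> user of an Ubuntu AMI; replace the placeholder with your instance's public IP:</p>
<pre><code class="lang-bash"># SSH keys must not be world-readable, so tighten permissions first.
chmod 400 aws-jenkins.pem
ssh -i aws-jenkins.pem ubuntu@&lt;public-ip-of-instance&gt;
</code></pre>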
<h3 id="heading-install-jenkins-on-your-instance"><strong>Install Jenkins on your Instance</strong></h3>
<ul>
<li><p>We are using <code>sudo</code> as a prefix with every command; you can also run <code>sudo su</code> at the beginning to avoid that.</p>
</li>
<li><p>Enter the following command for a quick update of all the software packages on your instance</p>
</li>
<li><p>Update your system</p>
<pre><code class="lang-bash">  sudo apt update
</code></pre>
</li>
<li><p>Install java</p>
<pre><code class="lang-bash">  sudo apt install openjdk-11-jre
</code></pre>
</li>
<li><p>Validate installation</p>
<pre><code class="lang-bash">  java -version
</code></pre>
</li>
<li><p>It should look something like this</p>
<pre><code class="lang-bash">  openjdk version <span class="hljs-string">"11.0.12"</span> 2021-07-20 OpenJDK Runtime Environment (build 11.0.12+7-post-Debian-2) OpenJDK 64-Bit Server VM (build 11.0.12+7-post-Debian-2, mixed mode, sharing)
</code></pre>
</li>
</ul>
<ul>
<li><p><em>Install Jenkins</em></p>
<p>  Just copy these commands and paste them onto your terminal.</p>
<pre><code class="lang-bash">  curl -fsSL https://pkg.jenkins.io/debian/jenkins.io.key | sudo tee \   /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
</code></pre>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \   https://pkg.jenkins.io/debian binary/ | sudo tee \   /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
</code></pre>
<pre><code class="lang-bash">  sudo apt-get update
</code></pre>
<pre><code class="lang-bash">  sudo apt-get install jenkins
</code></pre>
</li>
<li><p>Start Jenkins</p>
<pre><code class="lang-bash">  sudo systemctl <span class="hljs-built_in">enable</span> jenkins
</code></pre>
<pre><code class="lang-bash">  sudo systemctl start jenkins
</code></pre>
<pre><code class="lang-bash">  sudo systemctl status jenkins
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677999331388/a5a5c17c-9ad5-4615-b7a2-341de51d31d9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now copy the public IP address of the Jenkins instance, which is present in the details of the instance</p>
</li>
<li><p>Enter this IP address with the port number, i.e., "<code>&lt;ip_address:port_number&gt;</code>" - "18.182.13.97:8080". Note that port 8080 must be open in the instance's security group; a CLI sketch for this follows this list.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677992758766/dba5b4c3-c8d6-4a48-bba1-0038deb01ca4.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677998040061/5f67e6b9-ba12-40a5-9015-e0229f279146.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Now, get the password by entering the following command and enter it in the text box</p>
<pre><code class="lang-bash">    sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
</li>
<li><p>Click on install the suggested plugins</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675595470928/5e440859-4bf1-48b1-8792-682c303d1d42.png?auto=compress,format&amp;format=webp" alt /></p>
</li>
<li><p>Go ahead and enter your username and other details as asked</p>
</li>
<li><p>You are ready to use Jenkins🎉</p>
</li>
</ul>
</li>
</ul>
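<p>Reaching Jenkins on port 8080 requires an inbound rule in the instance's security group. You can add it from the console (as we do for port 8000 later) or from the command line. A minimal sketch using the AWS CLI, where the security group ID is a placeholder for your instance's actual group:</p>
<pre><code class="lang-bash"># Allow inbound TCP traffic to Jenkins on port 8080 from anywhere.
# Replace sg-0123456789abcdef0 with your instance's security group ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
</code></pre>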
<h3 id="heading-creating-a-container">Creating a Container</h3>
<ul>
<li><p>After getting Jenkins running on our host, it is time to create a Dockerfile. To use one, we first need to install Docker on our instance. Note that the <code>usermod</code> command below only takes effect after you log out and back in.</p>
<pre><code class="lang-bash">  sudo apt install docker.io
  sudo usermod -a -G docker <span class="hljs-variable">$USER</span>
</code></pre>
<pre><code class="lang-yaml">  <span class="hljs-string">FROM</span> <span class="hljs-string">node:12.2.0-alpine</span>
  <span class="hljs-string">WORKDIR</span> <span class="hljs-string">app</span>
  <span class="hljs-string">COPY</span> <span class="hljs-string">.</span> <span class="hljs-string">.</span>
  <span class="hljs-string">RUN</span> <span class="hljs-string">npm</span> <span class="hljs-string">install</span>
  <span class="hljs-string">EXPOSE</span> <span class="hljs-number">8000</span>
  <span class="hljs-string">CMD</span> [<span class="hljs-string">"node"</span>,<span class="hljs-string">"app.js"</span>]
</code></pre>
</li>
<li><p>Now it's time to build a Docker image and run a container from the Dockerfile. Note that the image tag passed to <code>docker build</code> must match the name used in <code>docker run</code>:</p>
<pre><code class="lang-bash">  sudo docker build . -t node-app

  sudo docker run -d -p 8000:8000 node-app
</code></pre>
</li>
<li><p>To see our project running, we need to add an inbound rule for port 8000 to the instance's security group (a CLI sketch follows this list)</p>
</li>
<li><p>Follow the same steps as when adding the inbound port for Jenkins, with some minor changes.</p>
</li>
<li><p>One inbound rule needs to be added</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677992675242/ef162864-c2b1-4f0e-9290-3f4270561ce3.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now copy the public IP address of the Jenkins master instance, which is present in the details of the instance</p>
</li>
<li><p>Enter this IP address with the port number, i.e., "<code>&lt;ip_address:port_number&gt;</code>" - "18.182.13.97:8000"</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677993087587/02fb0bbd-b1a9-41c2-8787-e8a5e87b45a1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>With this, you have finally deployed your Node.js project on AWS ⬇️</p>
</li>
</ul>
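<p>The same inbound rule can be added from the AWS CLI, and it is worth confirming that the container answers locally before testing from the browser. A sketch, again with a placeholder security group ID:</p>
<pre><code class="lang-bash"># Open port 8000 for the Node.js app (replace the placeholder sg ID).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8000 --cidr 0.0.0.0/0

# Verify on the instance that the container is up and responding.
sudo docker ps
curl http://localhost:8000
</code></pre>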
<h3 id="heading-automating-cicd-using-jenkins">Automating CI/CD using Jenkins</h3>
<ul>
<li><p>First, you need to create a new Jenkins freestyle project</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677993753071/c7529635-b9ed-49bb-8e0e-cbbf07035665.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>After creating the job, we need to create a private/public key pair, which will be used to integrate GitHub and Jenkins for the continuous integration part.</p>
</li>
<li><p>Going back to our instance, run the following commands to create the public and private keys.</p>
</li>
</ul>
<pre><code class="lang-bash">ssh-keygen
<span class="hljs-built_in">cd</span> .ssh
ls
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994198874/a644165e-9ca5-4aa0-b710-d708cd9f551b.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">sudo cat id_rsa.pub
</code></pre>
<p>Here <code>id_rsa.pub</code> is the public key and <code>id_rsa</code> is the private key</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994219988/3ad3c148-ac8e-4b69-afec-d9916284248a.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">sudo cat id_rsa
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994231473/57ce727a-229e-43e9-b961-8a877b19e4f1.png" alt /></p>
<ul>
<li><p>After generating the public and private keys, we need to connect Jenkins with GitHub; for that, we need to add an SSH key.</p>
</li>
<li><p>Go to the GitHub Settings &gt; click on SSH and GPG keys &gt; give any title and paste the public key we generated &gt; click on Add SSH Key</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994427927/4381074b-6dc3-444a-880f-82c133e10b4c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now let's go back to Jenkins. After creating the freestyle project, we need to add the respective credentials</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994632130/8e012021-0c17-4573-a3cb-6f1e2a03c3e9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Give the Jenkins project a description, then add the repo URL "<a target="_blank" href="https://github.com/YashPimple/Jenkins-CI-CD-with-GitHub-Integration">Github Link</a>"</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994816101/f164249a-f1e8-4688-994a-f844b88f8ca8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the Source Code Management, choose git and enter the repository URL, by clicking on code.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677994834248/25ed1453-7028-40a2-a555-3b6c4f2e0191.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a new credential by clicking on Add; a new pop-up screen will appear. Here, select the kind to be "<code>SSH Username with private key</code>" and let the scope be global.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677995454120/a92b47a2-6688-418d-b0e6-1230971164e1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Next, paste the private key generated on the instance here, and click the Add button to save it.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677995545466/31bfeeca-6e96-49c8-b3f0-5e01d0ff07e2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>After that, select the ubuntu credentials. Since all the code in the given project is in the master branch, we will use the master branch.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677995773582/4df38544-1ef9-4252-90ac-ec9662a1c2ae.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677995827889/2cbd5240-e5ec-4af2-b2ac-f4490aa3b5c2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Here click on the "Github hook trigger for GITScm polling" tick is so the any chance made in the Github code base can be reflect over here and a new build can be executed for that we need to make a small change in our Project Repository and all this work is done by Github Webhook &gt; Go to Project Repo Settings &gt; Webhook &gt; Add webhook</p>
</li>
<li><p>There you need to enter your IP address with the port number, i.e., "<code>&lt;ip_address:port_number/github-webhook/&gt;</code>" - "<a target="_blank" href="http://54.250.242.79:8080/"><code>http://54.250.242.79:8080/</code></a><code>github-webhook/</code>". Select the content type as "<code>application/json</code>" and click on "<code>Add webhook</code>"</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677996473985/b5710295-55ea-4dd0-9f79-bd8235f8a041.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Then let's go back to the Jenkins pipeline to work on the continuous deployment part, where our Docker image gets built.</p>
</li>
<li><p>Now let's add build steps to create our Docker container</p>
</li>
<li><p>Select Execute Shell and add the following commands (a more defensive variant is sketched after this list)</p>
<pre><code class="lang-yaml">  <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">.</span> <span class="hljs-string">-t</span> <span class="hljs-string">node-app-todo</span>
  <span class="hljs-string">docker</span> <span class="hljs-string">run</span> <span class="hljs-string">-d</span> <span class="hljs-string">-p</span> <span class="hljs-number">8000</span><span class="hljs-string">:8000</span> <span class="hljs-string">node-app-todo</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677996592218/82df4420-ceca-4851-9782-6568dba74f32.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now save and click on “Build Now”</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677996927465/611190ac-32cc-4e70-a06b-36a218bccd0b.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677996937522/9f94094c-f99e-478f-a4d0-ef8deb1327bd.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Yay! 🚀 that's a SUCCESS!</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677997050905/fd18967e-b5cf-4922-8f52-fa1e8964a7e1.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
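<p>One caveat with the shell step above: on the second build, the container from the previous build still owns port 8000, so the new <code>docker run</code> will fail. A hedged sketch of a more defensive variant, assuming we give the container a fixed name so it can be replaced on every build:</p>
<pre><code class="lang-bash"># Remove the container from the previous build, if any, then rebuild and rerun.
docker rm -f node-app-todo 2&gt;/dev/null || true
docker build . -t node-app-todo
docker run -d --name node-app-todo -p 8000:8000 node-app-todo
</code></pre>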
<p>That's all for now. If you have any questions, please leave them in the comments section and I will do my best to answer them. Thanks for reading. Any feedback is welcome! :) Connect with me on <a target="_blank" href="https://twitter.com/Yashpimple22"><strong>Twitter</strong></a>. Follow me for more such blogs on <a target="_blank" href="https://yashpimple.hashnode.dev/"><strong>Hashnode</strong></a>.</p>
<p>Follow DevOps Community on <a target="_blank" href="https://devopscommunity.hashnode.dev/"><strong>Hashnode</strong></a> and <a target="_blank" href="https://github.com/rohitg00/DevOpsCommunity"><strong>Github</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Linux Basics - Part 1]]></title><description><![CDATA[Linux is a widely used operating system that has gained popularity due to its many qualities. Here are five qualities of Linux that make it important and a base for most software:

Open Source: Linux is an open-source operating system, which means th...]]></description><link>https://blog.devopscommunity.in/linux-basics-part-1</link><guid isPermaLink="true">https://blog.devopscommunity.in/linux-basics-part-1</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[command line]]></category><category><![CDATA[operating system]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Aman Srivastav]]></dc:creator><pubDate>Tue, 28 Feb 2023 08:42:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677328349775/15367ccd-65c0-4d4a-b803-0cbea00b9539.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Linux is a widely used operating system that has gained popularity due to its many qualities. Here are five qualities of Linux that make it important and a base for most software:</p>
<ol>
<li><p><strong>Open Source:</strong> Linux is an open-source operating system, which means that its source code is freely available to anyone who wants to use it, modify it or distribute it. This allows developers to customize and improve the operating system according to their specific needs, making Linux highly adaptable and flexible.</p>
</li>
<li><p><strong>Stability and Reliability:</strong> Linux is known for its stability and reliability, which makes it a popular choice for servers and other mission-critical systems. The operating system is designed to be highly efficient and robust, with the ability to handle large workloads and provide uninterrupted service.</p>
</li>
<li><p><strong>Security:</strong> Linux is known for its strong security features, which make it less vulnerable to viruses and malware than other operating systems. This is due to its built-in security features, such as the ability to set permissions and restrict access to files and directories.</p>
</li>
<li><p><strong>Compatibility:</strong> Linux is highly compatible with a wide range of hardware and software, making it an ideal choice for developers who need to work with multiple platforms. This is because Linux supports a variety of programming languages and interfaces, making it easy to integrate with other systems.</p>
</li>
<li><p><strong>Customizability:</strong> Linux is highly customizable, allowing developers to modify and tailor the operating system to their specific needs. This means that Linux can be customized for a wide range of applications, from desktop computers to servers and embedded systems. This level of customizability also makes Linux an ideal choice for developers who want to create their custom distributions or applications.</p>
</li>
</ol>
<h2 id="heading-linux-kernel"><strong>Linux Kernel:</strong></h2>
<p>Before we dig into the kernel and its uses, let's first understand why an OS needs a kernel at all.</p>
<h3 id="heading-why-do-we-need-kernel-in-an-os"><strong>Why do we need Kernel in an OS?</strong></h3>
<p>We need a kernel in an operating system because it provides an interface between software and hardware. Without a kernel, the software would not be able to access hardware resources such as memory, input/output devices, and network devices. The kernel also provides important system services such as process management, memory management, and security features. The kernel is the foundation upon which the rest of the operating system is built, and it is essential for the proper functioning of the system.</p>
<h3 id="heading-introduction-to-linux-kernel"><strong>Introduction to Linux Kernel</strong></h3>
<p>The Linux Kernel is the core component of the Linux operating system. It is responsible for managing system resources and providing an interface for software to interact with hardware. The kernel is the first program that is loaded into memory when the system starts up, and it remains in memory for the entire time the system is running.</p>
<p>The kernel is designed to be highly configurable and customizable, allowing system administrators to tailor the system to their specific needs. It is also highly modular, allowing new functionality to be added to the system without requiring major modifications to the core kernel code.</p>
<h3 id="heading-linux-kernel-versions"><strong>Linux Kernel Versions</strong></h3>
<p>The Linux Kernel has gone through numerous revisions and updates since its initial release in 1991 by Linus Torvalds. Each version of the kernel is given a version number that consists of three parts: the major version number, the minor version number, and the patch level.</p>
<p>For example, in kernel version 5.16.8, 5 is the major version number, 16 is the minor version number, and 8 is the patch level.</p>
<p>The role of the Linux Kernel is to manage system resources and provide a layer of abstraction between software applications and the hardware. The kernel is responsible for many major tasks, including:</p>
<ul>
<li><p>Process and memory management: The kernel manages processes and allocates memory to them. It also controls the virtual memory system, which allows processes to use more memory than is physically available, and it schedules which process gets the CPU and for how long.</p>
</li>
<li><p>Input/output management: The kernel provides an interface for software to interact with hardware devices, including input/output devices such as keyboards, mice, and printers.</p>
</li>
<li><p>Network stack and device drivers: The kernel includes a network stack that provides networking functionality and device drivers that allow the software to communicate with hardware devices.</p>
</li>
<li><p>File system management: The kernel manages the file system, including access control and security.</p>
</li>
<li><p>Security and access control: The kernel provides access control and security features to protect the system and its data.</p>
</li>
</ul>
<h3 id="heading-linux-kernel-commands"><strong>Linux Kernel Commands</strong></h3>
<p>Here are some commonly used Linux Kernel commands (a short example session follows the list):</p>
<ul>
<li><p><code>uname -r</code>: This command shows the kernel version of the running system.</p>
</li>
<li><p><code>ls /lib/modules</code>: This command lists the installed kernel modules.</p>
</li>
<li><p><code>lsmod</code>: This command lists the currently loaded kernel modules.</p>
</li>
<li><p><code>modprobe</code>: This command loads or unloads a kernel module.</p>
</li>
<li><p><code>dmesg</code>: This command displays the system message buffer, which contains information about the kernel and system events.</p>
</li>
<li><p><code>sysctl</code>: This command is used to modify and view kernel parameters at runtime.</p>
</li>
</ul>
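<p>A short, hedged example session tying these commands together; it assumes a system where the <code>loop</code> module is available and that you have root privileges:</p>
<pre><code class="lang-bash"># Show the running kernel release.
uname -r

# Load a module, confirm it is present, then remove it.
sudo modprobe loop
lsmod | grep loop
sudo modprobe -r loop

# Read a kernel parameter at runtime, then change it.
sysctl net.ipv4.ip_forward
sudo sysctl -w net.ipv4.ip_forward=1
</code></pre>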
<h1 id="heading-linux-boot-process-and-run-levels"><strong>Linux Boot Process and Run-levels:</strong></h1>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/a225c4ec-309b-48d5-abde-294574a2c962/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201211Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=54802a224ad6141c4e322957698073f0666d368900322ba2c6df32a456d3cf18&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>The boot process of Linux can be broken down into several stages, each with its own tasks and responsibilities. These stages include:</p>
<ol>
<li><p>BIOS POST</p>
</li>
<li><p>Boot Loader</p>
</li>
<li><p>Kernel Initialization</p>
</li>
<li><p>INIT Process</p>
</li>
</ol>
<p>Let's explore each of these stages in detail.</p>
<p><strong>1. BIOS POST</strong></p>
<p>When you turn on a computer, the first thing that happens is a Power-On Self-Test (POST) performed by the Basic Input/Output System (BIOS). The POST checks the computer's hardware to make sure it is functioning properly, including the processor, memory, and input/output devices.</p>
<p>If the POST detects any issues, it will display an error message and prevent the computer from booting. If there are no issues, the BIOS will search for a bootable device, such as a hard drive or USB drive, and begin the boot process.</p>
<p><strong>2. Boot Loader</strong></p>
<p>The boot loader is responsible for loading the operating system kernel into memory. The most common boot loader for Linux is GRUB (GRand Unified Bootloader). When the BIOS hands over control to the boot loader, the boot loader displays a menu that allows the user to select which operating system they want to boot into.</p>
<p>The boot loader reads the kernel from the selected boot device and loads it into memory. The boot loader may also load an initial ramdisk (initrd), which is a temporary file system that contains essential files needed to boot the system, such as device drivers.</p>
<p><strong>3. Kernel Initialization</strong></p>
<p>Once the kernel has been loaded into memory, it begins the initialization process. The kernel first initializes the processor and memory management system. It then initializes the device drivers needed to access the hardware, such as the network card and storage devices.</p>
<p>The kernel also sets up the virtual file system and mounts the root file system. The root file system is the file system that contains the operating system and all its files.</p>
<p><strong>4. INIT Process</strong></p>
<p>The INIT process is the first user-space process that is started by the kernel. It is responsible for starting all the other user-space processes and services. The INIT process is identified by process ID 1 and is responsible for starting the system's run level.</p>
<p>The run-level determines which services and daemons should be started at boot time. The INIT process reads the configuration file (/etc/inittab) to determine which run-level to start. Each run-level has its own set of scripts that are executed to start the necessary services.</p>
<p>Once all the necessary services have been started, the system is ready to be used.</p>
<p>In conclusion, the Linux boot process involves several stages, each with its own responsibilities. The BIOS POST checks the hardware, the boot loader loads the kernel into memory, the kernel initializes the hardware and mounts the file system, and the INIT process starts the necessary user-space processes and services. Understanding the boot process can be helpful when troubleshooting boot-related issues.</p>
<p>In Linux, run-levels are a way to define the state in which the system should operate. Each run-level has a specific set of services that are started or stopped, which makes it possible to control what runs when the system is started up or shut down. The term run-levels is used in the <strong>SysV init</strong> systems. These have been replaced by systemd targets in <strong>systemd</strong> based systems.</p>
<p>The complete list of run-levels can be seen below; a short example of working with the corresponding systemd targets follows the list:</p>
<ul>
<li><p>Runlevel 0: Halt or shut down the system.</p>
</li>
<li><p>Runlevel 1: Single-user mode, which is used for system maintenance tasks.</p>
</li>
<li><p>Runlevel 2: Multi-user mode without network services.</p>
</li>
<li><p>Runlevel 3: Multi-user mode with network services.</p>
</li>
<li><p>Runlevel 4: Not used by default, but can be configured for a specific purpose.</p>
</li>
<li><p>Runlevel 5: Multi-user mode with a graphical user interface.</p>
</li>
<li><p>Runlevel 6: Reboot the system.</p>
</li>
</ul>
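<p>On systemd-based systems, you work with targets instead of run-levels. A minimal sketch of the equivalent operations, using standard systemd target names:</p>
<pre><code class="lang-bash"># Show the default target (the systemd analogue of the default run-level).
systemctl get-default

# Set the default to multi-user.target (roughly runlevel 3).
sudo systemctl set-default multi-user.target

# Switch immediately to rescue mode (roughly runlevel 1).
sudo systemctl isolate rescue.target
</code></pre>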
<h2 id="heading-linux-basic-commands-and-their-uses"><strong>Linux Basic commands and their uses :</strong></h2>
<p>Linux command types are divided into two parts.</p>
<ul>
<li><p>Internal commands</p>
</li>
<li><p>External commands</p>
<p>  <img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/16433d00-55ed-4e45-8cd9-9bbb9d236183/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230225%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230225T121245Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=9f3fd56bffd26f12c7618b13240c92341efdc9ab46d2bed150e93199003f9861&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
</li>
</ul>
<p>Absolute and Relative Paths:</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/a3034fc7-6adf-443e-af88-3b73010ed908/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230225%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230225T121351Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=81630416f07eed5c88de58d6be692ed8847c2a62d29410c85fe19d17911bb362&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/4e851ce8-dab4-479c-80e3-6f6067de865f/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230225%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230225T121402Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=6ca7f81711acb9f74d2d2d63deee163faf4efd1180cf747e82e0c40b62748c49&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<pre><code class="lang-bash">→ ls-al: Lists files and directories with detailed information like permissions,size, owner, etc.

→ ls: Lists all files and directories <span class="hljs-keyword">in</span> the present working directory

→ ls-R: Lists files <span class="hljs-keyword">in</span> sub-directories as well

→ ls-a: Lists hidden files as well

→ ls -alh: All the inofrmation of file and hidden files with the human redable format 

→ cp (<span class="hljs-built_in">source</span>) (destination): To copy a file 

→ cp -r (<span class="hljs-built_in">source</span>) (destination) : To copy a directry to another directory with all the contents inside the directory 

→ cp (<span class="hljs-built_in">source</span>) (destination) with the namme of file or dir you want:

→ <span class="hljs-built_in">cd</span> or <span class="hljs-built_in">cd</span> ~: Navigate to HOME directory

→ <span class="hljs-built_in">cd</span> ..: Move one level up

→ <span class="hljs-built_in">cd</span>: To change to a particular directory

→ <span class="hljs-built_in">cd</span> /: Move to the root directory

→ cat &gt; filename: Creates a new file

→ cat filename: Displays the file content

→ cat file1 file2 &gt; file3: Joins two files (file1, file2) and stores the output <span class="hljs-keyword">in</span> a new file (file3)

→ mv file <span class="hljs-string">"new file path"</span>: Moves the files to the new location

→ mv filename new_file_name: Renames the file to a new filename

→ sudo: Allows regular users to run programs with the security privileges of the superuser or root

→ rm filename: Deletes a file

→ rm -r: To delete directory with its files

→ man: Gives <span class="hljs-built_in">help</span> information on a <span class="hljs-built_in">command</span>

→ mkdir directory_name: Creates a new directory <span class="hljs-keyword">in</span> the present working directory or an at the specified path

→ rmdir: Deletes a directory

→ mv: Renames a directory
</code></pre>
<h3 id="heading-shell-types"><strong>Shell Types :</strong></h3>
<ul>
<li><p>Bourne Shell (sh)</p>
</li>
<li><p>C Shell (csh or tcsh)</p>
</li>
<li><p>Korn Shell (Ksh)</p>
</li>
<li><p>Z Shell (zsh)</p>
</li>
<li><p>Bourne again shell (bash)</p>
</li>
</ul>
<pre><code class="lang-bash">Note: 
**sockets*cores*thres=CPUs
eg: 1*4*2= 8 Parallels threads can run at a time
</code></pre>
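<p>These three values come from the CPU topology, which you can inspect directly; a small sketch using <code>lscpu</code>:</p>
<pre><code class="lang-bash"># Show the socket, core, and thread counts behind the formula above.
lscpu | grep -E 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core|^CPU\(s\)'
</code></pre>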
<h3 id="heading-file-types-in-linux"><strong>File Types in Linux:</strong></h3>
<ol>
<li><p>Regular : Images, Scripts, configuration / Data files</p>
</li>
<li><p>Directory : /home/bob /root /home/bob/code-directory</p>
</li>
<li><p>Special File :</p>
<pre><code class="lang-bash">    1. Character Files 

    2. Block Files   

    3. Links        -     Hard Link &amp; Soft Link 

    4. Socket Files 

    5. Named Pipes
</code></pre>
</li>
</ol>
<p>Let's explain each special file type :</p>
<ol>
<li><p>Character Devices: Character devices are used for accessing hardware devices that are treated as a stream of characters, such as a keyboard, mouse, or serial port. These devices are accessed using input/output system calls, such as read() and write(). In the file system, character devices are represented by special files that reside in the /dev directory and are shown with the file type character "c" in a long listing. For example, the file /dev/tty is a character device file that represents the current terminal.</p>
</li>
<li><p>Block Devices: Block devices are used for accessing hardware devices that are treated as a sequence of fixed-size blocks, such as hard drives and USB drives. These devices are accessed using input/output system calls, such as read() and write(). In the file system, block devices are represented by special files that reside in the /dev directory and are shown with the file type character "b" in a long listing. For example, the file /dev/sda is a block device file that represents the first hard disk.</p>
</li>
<li><p>Named Pipes: Named pipes are used for inter-process communication between processes on the same system. They provide a way for processes to send and receive data using standard input/output system calls. In the file system, named pipes can be created anywhere with the mkfifo command and are shown with the file type character "p" in a long listing.</p>
</li>
<li><p>Sockets: Sockets are used for communication between processes, locally or over a network. They provide a way for processes to send and receive data using standard input/output system calls. The socket files that appear in the file system (Unix domain sockets) are shown with the file type character "s" in a long listing. For example, the file /dev/log is a socket file used by the system logging daemon.</p>
</li>
<li><p>Symbolic Links: Symbolic links, also known as soft links, are special files that point to another file or directory in the file system. They provide a way to create shortcuts to files and directories, and can be used to make file system navigation easier. In the file system, symbolic links are represented by special files that contain the path to the target file or directory. For example, the file /usr/local/bin/python is a symbolic link that points to the actual Python executable file.</p>
</li>
</ol>
<p>We can identify the type of file with the command ls -ld and the first letter will show the type of file.</p>
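<p>To see these type characters in practice, here is a small hedged sketch that creates a named pipe and a symbolic link and lists them next to two device files (it assumes /dev/sda exists on your system):</p>
<pre><code class="lang-bash"># Create a named pipe and a symbolic link in /tmp.
mkfifo /tmp/demo-pipe
ln -s /etc/hosts /tmp/hosts-link

# The first character of each line shows the file type:
# p = named pipe, l = symbolic link, b = block device, c = character device
ls -l /tmp/demo-pipe /tmp/hosts-link /dev/sda /dev/tty
</code></pre>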
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/626a2463-2d46-4a17-be7b-bbbb06bde7e4/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201548Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=ae092d02b60c341e2b28712b513cbb707ae526c62bf2b5bf6038ad5e162ef4e0&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Below is the list of file types:</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/fb408235-5484-4bba-9442-a05ee076131a/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201515Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=877288c8b4ece9ef79fa79ab6155aa3319da994a73a2534ec5c7c0be555f348f&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<h3 id="heading-file-system-hierarchy">File system Hierarchy :</h3>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/82079061-640b-4709-b417-aaefdef6913d/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201623Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=5f3c6be5fd4f20c948379d41f5cca639e19f5370e448923ed06b640ae2406e81&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>The Linux File System Hierarchy Standard (FSH) defines the organization of directories and files in a Linux-based operating system. The following is an overview of the key directories and their uses:</p>
<ol>
<li><p>/ : This is the root directory of the file system hierarchy, which contains all other directories and files. It is the first directory that is accessed when the system boots up.</p>
</li>
<li><p>/bin : This directory contains essential command-line utilities that are necessary for booting and repairing the system, such as ls, cp, and mv.</p>
</li>
<li><p>/boot : This directory contains the files needed to boot the system, including the kernel and boot loader files.</p>
</li>
<li><p>/dev : This directory contains device files for all the hardware devices connected to the system, such as hard drives, USB drives, and printers.</p>
</li>
<li><p>/etc : This directory contains configuration files for the system and its applications, such as network configuration, user authentication, and system startup scripts.</p>
</li>
<li><p>/home : This directory contains the home directories for all the system users, where they can store their personal files and data.</p>
</li>
<li><p>/lib : This directory contains the shared libraries that are needed by the system and its applications.</p>
</li>
<li><p>/media : This directory contains the mount points for removable media, such as USB drives and CDs.</p>
</li>
<li><p>/mnt : This directory contains the mount points for file systems that are mounted temporarily, such as network file systems.</p>
</li>
<li><p>/opt : This directory is used for installing third-party applications that are not part of the core system.</p>
</li>
<li><p>/proc : This directory contains virtual files that provide information about the system and its processes, such as system memory, CPU usage, and running processes.</p>
</li>
<li><p>/root : This directory is the home directory for the root user.</p>
</li>
<li><p>/run : This directory contains files that are created at runtime, such as process IDs and socket files.</p>
</li>
<li><p>/sbin : This directory contains essential system administration utilities, such as mount and shutdown.</p>
</li>
<li><p>/srv : This directory is used for storing data for services provided by the system, such as web server data or FTP server data.</p>
</li>
<li><p>/tmp : This directory is used for temporary file storage.</p>
</li>
<li><p>/usr : This directory contains user applications and their data, such as libraries, documentation, and binaries.</p>
</li>
<li><p>/var : This directory contains files that are expected to change in size or content over time, such as system logs, mail spools, and cached files.</p>
</li>
</ol>
<p>Understanding the Linux File System Hierarchy and their uses is essential for managing a Linux-based operating system. It helps to locate files, diagnose problems, and make sure that the system is functioning correctly.</p>
<h3 id="heading-package-management"><strong>Package Management:</strong></h3>
<p>In Linux, a package is a compressed archive file that contains all the files necessary to install and run an application or software library. Packages are used to simplify the process of installing and managing software on a Linux system. They typically include the application or library files, as well as any configuration files, documentation, and dependencies required by the software. Package management tools such as apt, yum, and pacman are used to download, install, and manage packages on a Linux system.</p>
<p>When working with Linux package managers, we always have to take care of a package's dependencies, and of those dependencies' own dependencies.</p>
<p>Functions of Package Managers:</p>
<ol>
<li><p>Package Integrity and Authenticity</p>
</li>
<li><p>Simplified Package Management</p>
</li>
<li><p>Grouping Packages</p>
</li>
<li><p>Manage Dependencies</p>
</li>
</ol>
<p>Below are RPM- and Debian-based Linux distributions. I have listed only three of each; there are many more.</p>
<p>RPM-based distributions:</p>
<ol>
<li><p>Red Hat Enterprise Linux (RHEL) - RHEL is a commercial Linux distribution known for its stability and security. It is widely used in enterprise environments.</p>
</li>
<li><p>Fedora - Fedora is a community-driven distribution sponsored by Red Hat. It is known for its cutting-edge features and frequent updates.</p>
</li>
<li><p>CentOS - CentOS is a free and open-source distribution that is based on RHEL. It is commonly used for web servers, databases, and other enterprise applications.</p>
</li>
</ol>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/b4fd998f-3f02-4121-90db-ca6f2b33523f/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201753Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=934da729211ed51aa8cbbf17c5c9d9c2dbb2e226316221249bedfd3aff3c9987&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Debian-based distributions:</p>
<ol>
<li><p>Debian - Debian is a community-driven distribution known for its stability and adherence to the Unix philosophy. It is widely used in server environments.</p>
</li>
<li><p>Ubuntu - Ubuntu is a popular distribution that is based on Debian. It is known for its ease of use and strong community support.</p>
</li>
<li><p>Linux Mint - Linux Mint is a distribution that is based on Ubuntu. It is designed to be easy to use and includes many multimedia codecs and proprietary drivers.</p>
</li>
</ol>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/3e44fae6-e019-465e-a3fd-b3e1e611f45b/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T201852Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=a0d2d28c4ca98900139c8fe4249cdf36fa5b6ce218297b42192474c0272ecfda&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Common Package Managers:</p>
<ul>
<li><p>DPKG: For Debian based distributions</p>
</li>
<li><p>APT: A newer frontend for the DPKG system, found in Debian-based distributions like Ubuntu and Linux Mint.</p>
</li>
<li><p>APT-GET: The traditional frontend for the DPKG system.</p>
</li>
<li><p>RPM: Base package manager for RedHat-based distributions, such as RedHat Enterprise Linux, CentOS, Fedora.</p>
</li>
<li><p>YUM: A frontend for the RPM system, found in RedHat-based distributions.</p>
</li>
<li><p>DNF: A more feature-rich frontend for the RPM system.</p>
</li>
</ul>
<h3 id="heading-working-with-rpm"><strong>Working with RPM:</strong></h3>
<ul>
<li>Installation: The below command is used for installation of a package.</li>
</ul>
<pre><code class="lang-bash">[~]$ rpm -ivh telnet.rpm
</code></pre>
<p>where i stands for install, v stands for verbose (printing detailed output of the command), and h prints hash marks to show progress.</p>
<ul>
<li>Uninstalling: The below command is used for uninstalling a package. Note that <code>rpm -e</code> takes the installed package name, not the .rpm file name.</li>
</ul>
<pre><code class="lang-bash">[~]$ rpm -e telnet.rpm
</code></pre>
<ul>
<li>Upgrade: Below command upgrades a package to a newer version.</li>
</ul>
<pre><code class="lang-bash">[~]$ rpm -Uvh telnet.rpm
</code></pre>
<ul>
<li>Query: To query details about an installed package (by package name).</li>
</ul>
<pre><code class="lang-bash">[~]$ rpm -q telnet.rpm
</code></pre>
<ul>
<li>Verifying: With the below command we can verify the integrity of the package that owns a given file.</li>
</ul>
<pre><code class="lang-bash">[~]$ rpm -Vf &lt;path to file&gt;
</code></pre>
<p>RPM is a package manager used in RedHat-based Linux distributions, such as RedHat Enterprise Linux, CentOS, and Fedora. RPM stands for "Red Hat Package Manager" and is used to download, install, and manage packages on a Linux system.</p>
<p>Here are some RPM commands that you can use to work with packages:</p>
<ul>
<li><p><code>rpm -qa</code>: This command lists all the packages that are installed on the system.</p>
</li>
<li><p><code>rpm -qf &lt;filename&gt;</code>: This command tells you which package a file belongs to.</p>
</li>
<li><p><code>rpm -qi &lt;package&gt;</code>: This command displays information about a package, such as its version, release, and description.</p>
</li>
<li><p><code>rpm -ql &lt;package&gt;</code>: This command lists all the files that are included in a package.</p>
</li>
<li><p><code>rpm -qR &lt;package&gt;</code>: This command lists the dependencies required by a package.</p>
</li>
<li><p><code>rpm -e &lt;package&gt;</code>: This command removes a package from the system.</p>
</li>
<li><p><code>rpm -U &lt;package&gt;</code>: This command upgrades a package to a newer version.</p>
</li>
<li><p><code>rpm -ivh &lt;filename&gt;</code>: This command installs a package from a file.</p>
</li>
</ul>
<p>When working with RPM packages, it is important to keep in mind the dependencies required by the package. RPM will automatically resolve and install any required dependencies, but it is important to ensure that all dependencies are met before installing a package.</p>
<p>In addition to RPM, there are other package managers available for managing packages on Linux systems, such as dpkg and APT for Debian-based distributions.</p>
<h3 id="heading-yum"><strong>YUM</strong></h3>
<p>YUM (Yellowdog Updater, Modified) is a package manager used in Red Hat-based Linux distributions, such as Red Hat Enterprise Linux, CentOS, and Fedora. YUM is similar to the RPM package manager, but it provides additional features such as automatic dependency resolution and the ability to manage software repositories.</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/dcf9afcb-6303-42c0-b8f8-e998f83678c7/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T202127Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=4b417df1ea5a58926876db4726865fd5a21459ad3f30d91d8ecaefd58e575a0d&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>YUM can be used to install, update, and remove packages on a Linux system. It is also used to manage software repositories, which are collections of software packages that are available for installation on the system.</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/dcf9afcb-6303-42c0-b8f8-e998f83678c7/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230225%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230225T121934Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=7baebb6587c3ac5b6a51b31c1da3ffe0ff0cc474ca75c11d287f8812984d53d7&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Here are some YUM commands that you can use to work with packages:</p>
<ul>
<li><p><code>yum install &lt;package&gt;</code>: This command installs a package on the system.</p>
</li>
<li><p><code>yum update &lt;package&gt;</code>: This command updates a package to a newer version.</p>
</li>
<li><p><code>yum remove &lt;package&gt;</code>: This command removes a package from the system.</p>
</li>
<li><p><code>yum search &lt;package&gt;</code>: This command searches for a package in the available software repositories.</p>
</li>
<li><p><code>yum list &lt;package&gt;</code>: This command lists all the available versions of a package in the software repositories.</p>
</li>
<li><p><code>yum info &lt;package&gt;</code>: This command displays information about a package, such as its version, release, and description.</p>
</li>
<li><p><code>yum clean all</code>: This command clears the cached data used by YUM.</p>
</li>
<li><p><code>yum repolist</code> : This command will show all the repo’s added to the system.</p>
</li>
<li><p><code>sudo yum provides tcpdump</code>: This command shows which package provides a given file or command (here, tcpdump).</p>
</li>
</ul>
<p>YUM also provides additional features such as the ability to manage software groups, which are collections of related packages, and the ability to manage software repositories and their associated GPG keys.</p>
<p>YUM is a powerful package manager that can simplify the process of installing and managing software on a Linux system. However, it is important to keep in mind the dependencies required by the packages being installed or updated, as well as the security implications of adding new software repositories to the system.</p>
<h3 id="heading-package-installation-through-yum"><strong>Package installation through YUM:</strong></h3>
<p>YUM (Yellowdog Updater, Modified) is a package manager that is used to install, update, and remove packages on Red Hat-based Linux distributions, such as Red Hat Enterprise Linux, CentOS, and Fedora. When installing a package with YUM, the package manager will first check to see if the package is available in the configured software repositories. If the package is available, YUM will download the package and any required dependencies from the repository and install them on the system.</p>
<p>Before downloading and installing the package, YUM will perform a dependency check to ensure that all required dependencies are met. If any dependencies are missing, YUM will prompt the user to install them before continuing with the package installation.</p>
<p>YUM also provides the ability to manage software groups, which are collections of related packages, and software repositories, which are collections of software packages that are available for installation on the system. When working with YUM, it is important to keep in mind the dependencies required by the packages being installed or updated, as well as the security implications of adding new software repositories to the system.</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/5629c6fa-8246-41fe-a9ff-ae94cfae1e6f/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T202222Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=46cd5d01a97aaed5e46c7a28cba827429cbd6c190f0c95146bc82d9e9338bfd8&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>To install a package using YUM, follow these steps:</p>
<ol>
<li><p>Update the package list by running <code>sudo yum update</code>.</p>
</li>
<li><p>Search for the package by running <code>sudo yum search &lt;package-name&gt;</code>.</p>
</li>
<li><p>Install the package by running <code>sudo yum install &lt;package-name&gt;</code>.</p>
</li>
<li><p>If the package has any dependencies, YUM will prompt you to confirm the installation of those dependencies. Type 'y' to confirm and proceed with the installation.</p>
</li>
<li><p>Once the package and its dependencies have been installed, YUM will display a message indicating that the installation was successful.</p>
</li>
</ol>
<p>It is important to note that YUM requires root privileges to install packages. Therefore, you must prefix all YUM commands with 'sudo'.</p>
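<p>Putting those steps together, a minimal example session might look like this, using tcpdump as an illustrative package:</p>
<pre><code class="lang-bash"># Refresh metadata and apply available updates.
sudo yum update -y

# Find the package, then install it along with its dependencies.
sudo yum search tcpdump
sudo yum install -y tcpdump

# Confirm the installation.
yum list installed tcpdump
</code></pre>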
<h3 id="heading-dpkg-debian-package-manager"><strong>DPKG: Debian Package Manager:</strong></h3>
<p>DPKG (Debian Package) is a package manager used in Debian-based Linux distributions, such as Debian, Ubuntu, and their derivatives. It is a low-level tool for handling software packages in a Debian system, and it is used to install, remove, and manage packages on a Debian-based system.</p>
<p>DPKG is a command-line tool that works in conjunction with other package management tools, such as APT (Advanced Package Tool), which is a higher-level package management tool. APT is a front-end to DPKG and provides a user-friendly interface to install, remove, and manage packages.</p>
<p>DPKG is responsible for handling the installation and removal of individual packages on a Debian-based system. It keeps track of installed packages, their dependencies, and their configuration files. It also provides tools for querying package information, including the version number, description, and dependencies of a package.</p>
<p>It is a low-level package manager; similar to RPM, it is used for the purposes below. The package extension is .deb (a short example session follows the list).</p>
<ul>
<li><p>Installation/Upgrade: dpkg -i &lt;package_file.deb&gt;</p>
</li>
<li><p>Uninstalling: dpkg -r &lt;package_name&gt;</p>
</li>
<li><p>List with version details: dpkg -l &lt;package_name&gt;</p>
</li>
<li><p>Status: dpkg -s &lt;package_name&gt;</p>
</li>
<li><p>Show package details: dpkg -p &lt;package_name&gt;</p>
</li>
</ul>
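<p>A small hedged example session; the .deb file name is illustrative (here, the GNU hello package as commonly packaged on Debian/Ubuntu):</p>
<pre><code class="lang-bash"># Install a package from a local .deb file (file name is illustrative).
sudo dpkg -i hello_2.10-2_amd64.deb

# Inspect it by package name.
dpkg -l hello
dpkg -s hello

# Remove it (keeps configuration files; use -P to purge them too).
sudo dpkg -r hello
</code></pre>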
<h3 id="heading-aptapt-get"><strong>APT/APT-GET :</strong></h3>
<p>APT (Advanced Package Tool) is a high-level package management system used in Debian-based Linux distributions, such as Debian, Ubuntu, and their derivatives. APT is a front-end to the lower-level package manager DPKG (Debian Package), and it provides a user-friendly interface for installing, updating, and removing packages on a Debian-based system.</p>
<p>APT uses a package database to keep track of installed packages and their dependencies, and it can automatically resolve dependencies and download packages from software repositories over the internet.</p>
<p>Commands:</p>
<ul>
<li><p>apt update: To refresh the repositories; this command fetches the latest package information from all configured sources.</p>
</li>
<li><p>apt upgrade: To upgrade all the available packages.</p>
</li>
<li><p>apt edit-sources: To edit the sources.list file that defines the package repositories</p>
</li>
<li><p>apt search &lt;package_name&gt;: To search for a package</p>
</li>
<li><p>apt install: To install a package</p>
</li>
<li><p>apt remove: To remove the package</p>
</li>
<li><p>apt list: To list all the available packages</p>
</li>
</ul>
<p>APT has several command-line tools, with the most commonly used tool being APT-GET. Here are some useful APT commands:</p>
<ol>
<li><p><code>sudo apt-get update</code>: This command updates the package list from all enabled repositories. It should be run before attempting to install or update packages to ensure that the package information is up-to-date.</p>
</li>
<li><p><code>sudo apt-get upgrade</code>: This command upgrades all installed packages to their latest versions. It downloads and installs updated packages and their dependencies.</p>
</li>
<li><p><code>sudo apt-get install package_name</code>: This command installs a package on the system. It automatically resolves dependencies and downloads any required packages from the configured repositories.</p>
</li>
<li><p><code>sudo apt-get remove package_name</code>: This command removes a package from the system. It removes the package and any files associated with it, but it does not remove any dependencies that are no longer needed.</p>
</li>
<li><p><code>sudo apt-get autoremove</code>: This command removes packages that were installed as dependencies but are no longer needed by any other packages on the system.</p>
</li>
<li><p><code>sudo apt-get purge package_name</code>: This command removes a package from the system, including any configuration files associated with it.</p>
</li>
<li><p><code>sudo apt-get clean</code>: This command cleans the local repository of retrieved package files that are no longer needed.</p>
</li>
<li><p><code>sudo apt-get autoclean</code>: This command removes packages that are no longer available in the repositories and their associated files.</p>
</li>
<li><p><code>apt-cache search search_term</code>: This command searches the package database for packages that match the search term.</p>
</li>
<li><p><code>apt-cache show package_name</code>: This command shows information about a specific package, including its version number, description, and dependencies.</p>
</li>
<li><p><code>apt-cache policy package_name</code>: This command shows the installed and available versions of a package, as well as the repository it is installed from.</p>
</li>
</ol>
<p>To view file sizes in Linux, you can use the following commands (a combined example follows the list):</p>
<ul>
<li><p><code>ls -lh</code>: Lists files in the current directory with sizes in human-readable format.</p>
</li>
<li><p><code>du -h &lt;directory&gt;</code>: Shows the disk usage of files in a directory, with sizes in human-readable format.</p>
</li>
<li><p><code>du -sh &lt;directory&gt;</code>: Shows the total disk usage of a directory, with size in human-readable format.</p>
</li>
<li><p><code>df -h</code>: Displays the disk space usage of all mounted file systems, with sizes in human-readable format.</p>
</li>
<li><p><code>du -sk &lt;file_or_directory&gt;</code>: Shows the total size of a file or directory in kilobytes.</p>
</li>
</ul>
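<p>For example, to find which directories are consuming the most space under <code>/var</code> (the path is illustrative):</p>
<pre><code class="lang-bash"># summarize each subdirectory of /var and sort by size
sudo du -sh /var/* | sort -h

# then check overall free space on all mounted file systems
df -h
</code></pre>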
<h3 id="heading-archiving-files-compressing-and-uncompressing-files"><strong>Archiving files, Compressing and Uncompressing files.</strong></h3>
<p>Here are some Linux commands for archiving files, compressing and uncompressing files.</p>
<ul>
<li><code>tar</code>: Tar is a command-line utility used for archiving files and directories into a single file, called a tarball. Tar can also be used to extract files from a tarball. The basic syntax for creating a tarball is:</li>
</ul>
<pre><code class="lang-bash">tar -cvf archive.tar file1 file2 directory/
tar -cf test.tar file1 file2 file3 
ls -ltr test.tar
tar -tf test.tar : to see the contents of tarball
tar -xf test.tar : Extract the content from tarball 
tar -zcf test.tar file1 file2 file3 : To reduce the size
</code></pre>
<p>The <code>-c</code> option tells tar to create a new archive, the <code>-v</code> option enables verbose output, and the <code>-f</code> option specifies the name of the archive file. The files and directories to be included in the archive are listed at the end of the command.</p>
<p>To extract files from a tarball, use the following command:</p>
<pre><code class="lang-bash">tar -xvf archive.tar
</code></pre>
<p>The <code>-x</code> option tells tar to extract files from an archive.</p>
<ul>
<li><code>gzip</code>: Gzip is a command-line utility used for compressing files. Gzip compresses files using the Lempel-Ziv algorithm, and can reduce file size by up to 90%. The basic syntax for compressing a file with gzip is:</li>
</ul>
<pre><code class="lang-bash">gzip file.txt
</code></pre>
<p>This will create a compressed file called <code>file.txt.gz</code>.</p>
<p>To extract a compressed file, use the following command:</p>
<pre><code class="lang-bash">gzip -d file.txt.gz
</code></pre>
<p>The <code>-d</code> option tells gzip to decompress the file.</p>
<ul>
<li><code>zip</code>: Zip is a command-line utility used for archiving files and directories into a single compressed file. Zip is similar to tar, but uses a different compression algorithm. The basic syntax for creating a zip file is:</li>
</ul>
<pre><code class="lang-bash">zip archive.zip file1 file2 directory/
</code></pre>
<p>The files and directories to be included in the archive are listed at the end of the command.</p>
<p>To extract files from a zip file, use the following command:</p>
<pre><code class="lang-bash">unzip archive.zip
</code></pre>
<ul>
<li><code>bzip2</code>: Bzip2 is a command-line utility used for compressing files. Bzip2 uses the Burrows-Wheeler transform algorithm, and can provide better compression than gzip. The basic syntax for compressing a file with bzip2 is:</li>
</ul>
<pre><code class="lang-bash">bzip2 file.txt
</code></pre>
<p>This will create a compressed file called <code>file.txt.bz2</code>.</p>
<p>To extract a compressed file, use the following command:</p>
<pre><code class="lang-bash">bzip2 -d file.txt.bz2
</code></pre>
<p>The <code>-d</code> option tells bzip2 to decompress the file.</p>
<ul>
<li><code>xz</code>: The <code>xz</code> command in Linux is used for file compression and decompression. It uses the LZMA algorithm to compress files, which can result in higher compression ratios than other compression algorithms such as gzip and bzip2. The basic syntax for compressing a file with xz is:</li>
</ul>
<pre><code class="lang-bash">xz file.txt
</code></pre>
<p>This will create a compressed file called <code>file.txt.xz</code>.</p>
<p>To extract a compressed file, use the following command:</p>
<pre><code class="lang-bash">xz -d file.txt.xz
</code></pre>
<p>The <code>-d</code> option tells xz to decompress the file.</p>
<p>You do not need to decompress a compressed file every time you want to read it. The following tools let you read compressed files directly, depending on the compression format:</p>
<ol>
<li><p>zcat</p>
</li>
<li><p>bzcat</p>
</li>
<li><p>xzcat</p>
</li>
</ol>
<pre><code class="lang-bash">zcat hostfile.text.bz2
</code></pre>
<h3 id="heading-searching-for-files-and-directories"><strong>Searching for Files and Directories in the Linux File System:</strong></h3>
<p>Here are some commands for searching for files and directories in the Linux file system:</p>
<ul>
<li><code>find</code>: Find is a powerful command-line utility used to search for files and directories in a file system. The basic syntax for finding files is:</li>
</ul>
<pre><code class="lang-bash">find /path/to/search -name <span class="hljs-string">"filename"</span>
</code></pre>
<p>This will search for files with the name "filename" in the directory /path/to/search and its subdirectories. You can also use wildcards to search for files with names that match a pattern, such as:</p>
<pre><code class="lang-bash">find /path/to/search -name <span class="hljs-string">"*.txt"</span>
</code></pre>
<p>This will search for all files with the extension ".txt" in the directory /path/to/search and its subdirectories.</p>
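<p><code>find</code> also supports filters beyond the file name. Here are a few commonly useful ones, with illustrative paths and values:</p>
<pre><code class="lang-bash"># files larger than 100 MB under /var/log
find /var/log -type f -size +100M

# files modified within the last 7 days
find /home -type f -mtime -7

# run a command on each match (here: long-list every .conf file)
find /etc -name "*.conf" -exec ls -l {} \;
</code></pre>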
<ul>
<li><code>locate</code>: Locate is another command-line utility used to search for files and directories in a file system. It is faster than the find command because it relies on a pre-built index of the file system, but that index may be out of date. To refresh the index, run the <code>updatedb</code> command before using locate. The basic syntax for using locate is:</li>
</ul>
<pre><code class="lang-bash">updatedb : It will update the database of FS.
locate filename
</code></pre>
<p>Note: <code>updatedb</code> must be run as the root user (or with <code>sudo</code>).</p>
<p>This will search for files with the name "filename" in the entire file system. You can also use wildcards to search for files with names that match a pattern, such as:</p>
<pre><code class="lang-bash">locate *.txt
</code></pre>
<p>This will search for all files with the extension ".txt" in the entire file system.</p>
<ul>
<li><p><code>grep</code>: Grep is a command-line utility used to search for text within files. Its basic syntax is shown after this list. Commonly used options include:</p>
</li>
<li><p><code>-i</code>: This option tells <code>grep</code> to perform a case-insensitive search, so the search term matches both uppercase and lowercase letters. For example, the command <code>grep -i "example" file.txt</code> will match both "example" and "EXAMPLE" in the file <code>file.txt</code>.</p>
</li>
<li><p><code>-r</code>: This option tells <code>grep</code> to search recursively through directories, covering all files in the specified directory and its subdirectories. For example, the command <code>grep -r "example" /path/to/directory</code> will search for the string "example" in all files under <code>/path/to/directory</code>.</p>
</li>
<li><p><code>-v</code>: This option tells <code>grep</code> to invert the match, displaying only lines that do not match the search term. For example, the command <code>grep -v "example" file.txt</code> will display all lines in <code>file.txt</code> that do not contain the word "example".</p>
</li>
<li><p><code>-w</code>: This option tells <code>grep</code> to match only whole words. For example, the command <code>grep -w "example" file.txt</code> will only match lines that contain "example" as a whole word, and not as part of another word such as "examples".</p>
</li>
<li><p><code>-A n</code>: This option tells <code>grep</code> to display <code>n</code> lines of context after each match. For example, the command <code>grep -A 2 "example" file.txt</code> will display the matching line as well as the two lines that follow it.</p>
</li>
<li><p><code>-B n</code>: This option tells <code>grep</code> to display <code>n</code> lines of context before each match. For example, the command <code>grep -B 2 "example" file.txt</code> will display the matching line as well as the two lines that come before it.</p>
</li>
</ul>
<pre><code class="lang-bash">grep <span class="hljs-string">"search term"</span> /path/to/file
</code></pre>
<p>This will search for the string "search term" in the file /path/to/file. You can also use wildcards to search for text within multiple files, such as:</p>
<pre><code class="lang-bash">grep <span class="hljs-string">"search term"</span> /path/to/directory/*
</code></pre>
<p>This will search for the string "search term" in all files in the directory /path/to/directory.</p>
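<p>These options can be combined. For example, a recursive, case-insensitive search that also prints line numbers (<code>-n</code>); the search term and path are placeholders:</p>
<pre><code class="lang-bash">grep -rin "error" /var/log
</code></pre>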
<ul>
<li><code>which</code>: Which is a command-line utility used to locate the binary file of a command. The basic syntax for using which is:</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-built_in">which</span> <span class="hljs-built_in">command</span>
</code></pre>
<p>This will display the location of the binary file for the specified command.</p>
<ul>
<li><code>whereis</code>: Whereis is a command-line utility used to locate the binary, source, and manual page files for a command. The basic syntax for using whereis is:</li>
</ul>
<pre><code class="lang-bash">whereis <span class="hljs-built_in">command</span>
</code></pre>
<p>This will display the location of the binary, source, and manual page files for the specified command.</p>
<h3 id="heading-io-redirection"><strong>IO Redirection:</strong></h3>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/6b6b4243-97f2-4f50-8be0-2dcbde7aad15/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230226%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230226T202319Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=3e04f3edf00be785e425b55ce5d5597f66655ed0268fba9170913e926a86c773&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Input/output (IO) redirection is a powerful feature of the Linux command line that allows you to redirect the input or output of a command to a file or another command. This can be useful for a variety of tasks, such as logging command output to a file, processing the output of a command in another command, or reading input from a file instead of the terminal.</p>
<p>Here are some common IO redirection operators in Linux:</p>
<ul>
<li><p><code>&gt;</code>: This operator redirects the output of a command to a file, overwriting the existing contents of the file. For example, the command <code>ls &gt; file.txt</code> will write the output of the <code>ls</code> command to the file <code>file.txt</code>. If the file does not exist, it will be created. If the file already exists, its contents will be overwritten.</p>
</li>
<li><p><code>&gt;&gt;</code>: This operator redirects the output of a command to a file, appending the output to the end of the file. For example, the command <code>ls &gt;&gt; file.txt</code> will append the output of the <code>ls</code> command to the end of the file <code>file.txt</code>. If the file does not exist, it will be created.</p>
</li>
<li><p><code>&lt;</code>: This operator redirects the input of a command from a file, instead of the terminal. For example, the command <code>sort &lt; file.txt</code> will sort the contents of the file <code>file.txt</code>, using the file as input instead of the terminal.</p>
</li>
<li><p><code>|</code>: This operator redirects the output of one command to the input of another command. For example, the command <code>ls | grep .txt</code> will list all files in the current directory and pipe the output to the <code>grep</code> command, which will filter the output to show only files with the extension <code>.txt</code>.</p>
</li>
</ul>
<p>Here are some examples of using IO redirection in Linux:</p>
<ul>
<li><p><code>ls &gt; file.txt</code>: This command writes the output of the <code>ls</code> command to the file <code>file.txt</code>.</p>
</li>
<li><p><code>echo "Hello, world!" &gt;&gt; file.txt</code>: This command appends the string "Hello, world!" to the end of the file <code>file.txt</code>.</p>
</li>
<li><p><code>sort &lt; file.txt &gt; sorted.txt</code>: This command sorts the contents of the file <code>file.txt</code> and writes the sorted output to the file <code>sorted.txt</code>.</p>
</li>
<li><p><code>cat file.txt | grep "Hello"</code>: This command reads the contents of the file <code>file.txt</code>, pipes the output to the <code>grep</code> command, and filters the output to show only lines containing the string "Hello".</p>
</li>
</ul>
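<p>One related operator not shown above is <code>2&gt;</code>, which redirects standard error rather than standard output. A brief sketch (the file names are placeholders):</p>
<pre><code class="lang-bash"># send error messages to a separate file
ls /nonexistent 2&gt; errors.txt

# send both normal output and errors to the same file
ls /etc /nonexistent &gt; all.txt 2&gt;&amp;1
</code></pre>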
<p>IO redirection can be a powerful tool for managing and processing data on the Linux command line. By redirecting input and output, you can perform a wide range of tasks with ease.</p>
<p>The <code>tee</code> command in Linux allows you to redirect the output of a command to a file and also display it on the terminal at the same time.</p>
<p>The basic syntax for using the <code>tee</code> command is:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">command</span> | tee file.txt
</code></pre>
<p>This will execute the command and redirect the output to both the file <code>file.txt</code> and the terminal.</p>
<p>You can also append the output to an existing file using the <code>-a</code> option:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">command</span> | tee -a file.txt
</code></pre>
<p>This will append the output of the command to the end of the file <code>file.txt</code>.</p>
<p>The <code>tee</code> command can be useful for logging output to a file while still being able to see it on the terminal in real-time. It can also be used in conjunction with other commands to perform complex operations on data.</p>
<h3 id="heading-vi-editor"><strong>VI editor:</strong></h3>
<p>Vi is a powerful text editor that is widely used on Linux and other Unix-like operating systems. It is a command-line based editor that allows users to create, edit, and save text files from within a terminal window.</p>
<p>Vi has a reputation for being difficult to use, with a steep learning curve. However, once users become familiar with its commands and modes, they find that Vi is a highly efficient tool for editing text.</p>
<p>VI editor is a popular text editor in Linux and Unix systems. Here's a list of some of the most commonly used commands in VI editor:</p>
<ol>
<li><p><strong>i</strong> - Switch to Insert mode, allowing you to insert text.</p>
</li>
<li><p><strong>esc</strong> - Return to command mode from Insert mode.</p>
</li>
<li><p><strong>x</strong> - Delete the character under the cursor.</p>
</li>
<li><p><strong>dd</strong> - Delete the current line.</p>
</li>
<li><p><strong>3dd</strong> - Delete 3 lines starting from the current line.</p>
</li>
<li><p><strong>u</strong> - Undo the previous change.</p>
</li>
<li><p><strong>:w</strong> - Save the current file.</p>
</li>
<li><p><strong>:q</strong> - Quit the editor.</p>
</li>
<li><p><strong>:wq</strong> - Save the current file and quit the editor.</p>
</li>
<li><p><strong>:q!</strong> - Quit the editor without saving changes.</p>
</li>
<li><p><strong>yy</strong> - Yank (copy) the current line.</p>
</li>
<li><p><strong>p</strong> - Paste the contents of the clipboard after the cursor.</p>
</li>
<li><p><strong>/searchterm</strong> - Search for the specified term in the file.</p>
</li>
<li><p><strong>n</strong> - Find the next occurrence of the search term.</p>
</li>
<li><p><strong>:set number</strong> - Display line numbers.</p>
</li>
<li><p><strong>:set nonumber</strong> - Hide line numbers.</p>
</li>
<li><p><strong>:set syntax=language</strong> - Set syntax highlighting for a specific programming language.</p>
</li>
<li><p><strong>:set tabstop=n</strong> - Set the tab width to n spaces.</p>
</li>
<li><p><strong>:set expandtab</strong> - Use spaces instead of tabs for indentation.</p>
</li>
<li><p><strong>:set noexpandtab</strong> - Use tabs instead of spaces for indentation.</p>
</li>
<li><p><strong>:set autoindent</strong> - Automatically indent new lines to match the previous line.</p>
</li>
<li><p><strong>:set nowrap</strong> - Disable line wrapping.</p>
</li>
</ol>
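<p>As a brief illustration, a typical editing session might look like this (the file name is a placeholder; lines starting with <code>#</code> are explanatory comments, not vi input):</p>
<pre><code class="lang-bash">vi notes.txt   # open the file in vi
# press i to enter Insert mode, type your text, then press Esc
# type :wq to save and quit
# inside vi, :%s/old/new/g replaces every occurrence of "old" with "new"
</code></pre>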
<p>These are just a few of the many commands available in VI editor. You can find more commands by typing <strong>:help</strong> in command mode to see the full list of commands and their descriptions.</p>
<p><strong>THANK YOU FOR YOUR VALUABLE TIME !!</strong></p>
<p>Do follow DevOps Community for more 💙</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="http://blog.devopscommunity.in">http://blog.devopscommunity.in</a></div>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Powered Streaming]]></title><description><![CDATA[What is Streaming?
Streaming is a technology that allows people to access and view digital media, such as audio, video, or games, in real-time, over the internet. Unlike traditional methods of delivering media, such as downloading a file or watching ...]]></description><link>https://blog.devopscommunity.in/kubernetes-powered-streaming</link><guid isPermaLink="true">https://blog.devopscommunity.in/kubernetes-powered-streaming</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[streaming]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[Apache Kafka]]></category><category><![CDATA[apache pulsar]]></category><dc:creator><![CDATA[Sajjan Yadav]]></dc:creator><pubDate>Thu, 16 Feb 2023 05:26:13 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676214867118/a4021741-c597-4625-8e07-2927265750aa.webp" alt class="image--center mx-auto" /></p>
<h1 id="heading-what-is-streaming">What is Streaming?</h1>
<p>Streaming is a technology that allows people to access and view digital media, such as audio, video, or games, in real-time, over the internet. Unlike traditional methods of delivering media, such as downloading a file or watching a video on demand, streaming sends media in small packets of data that are played immediately as they arrive, instead of being stored on the viewer's device first. This means that the viewer can start watching or listening to the content almost immediately, without having to wait for the entire file to be downloaded.</p>
<h1 id="heading-where-is-streaming-used">Where is Streaming Used?</h1>
<ul>
<li><p><strong>Online Video and Music Services</strong>: Platforms like Netflix, Hulu, YouTube, and Spotify use streaming to deliver movies, TV shows, music, and other multimedia content to their customers.​</p>
</li>
<li><p><strong>Live Broadcasts</strong>: Streaming is used to deliver live events, such as concerts, sports events, and news broadcasts, over the internet.​</p>
</li>
<li><p><strong>Gaming</strong>: Online gaming services use streaming to deliver high-quality, low-latency gaming experiences to users.​</p>
</li>
<li><p><strong>Corporate Communications</strong>: Companies use streaming for internal and external communications, such as video conferences, webinars, and online training sessions.​</p>
</li>
<li><p><strong>Social Media</strong>: Social media platforms like Facebook and Twitter use streaming to deliver live videos to users.​</p>
</li>
</ul>
<h1 id="heading-types-of-streaming">Types of Streaming?</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676215575288/524beebe-db95-4747-ad4c-031830629c61.png" alt class="image--center mx-auto" /></p>
<p>There are several types of streaming. Some of the major ones are:</p>
<ol>
<li><p><strong>Video Streaming</strong>: This is the most common type of streaming and involves the delivery of video content over the internet, such as movies, TV shows, or live events.</p>
</li>
<li><p><strong>Audio Streaming</strong>: This type of streaming involves the delivery of audio content, such as music, podcasts, or radio shows, over the internet.</p>
</li>
<li><p><strong>Gaming Streaming</strong>: This type of streaming involves playing video games and broadcasting the gameplay to an online audience.</p>
</li>
<li><p><strong>Screen Streaming</strong>: This type of streaming involves broadcasting a computer screen or mobile device screen to an online audience, often used for presentations or tutorials.</p>
</li>
<li><p><strong>Event Streaming</strong>: This type of streaming involves broadcasting live events, such as concerts, sports events, or political rallies, over the internet.</p>
</li>
<li><p><strong>Social Media Streaming</strong>: This type of streaming involves broadcasting live video content through social media platforms, such as Facebook Live or Instagram Live.</p>
</li>
<li><p><strong>VR Streaming</strong>: This type of streaming involves delivering virtual reality content over the internet, allowing users to experience immersive environments and interact with digital objects in real time.</p>
</li>
</ol>
<h1 id="heading-what-are-streaming-workloads">What are Streaming Workloads?</h1>
<ul>
<li><p>Streaming workloads refer to the processing and management of real-time media data in a streaming environment. This can include tasks such as encoding and transcoding media, delivering content to end-users, and collecting and analyzing metrics on usage and performance. ​</p>
</li>
<li><p>These workloads are typically used in scenarios where low-latency and high-throughput communication is required, such as in financial trading systems, social media platforms, and internet-of-things (IoT) systems. ​</p>
</li>
<li><p>Examples of streaming technologies include Apache Kafka, RedPanda, and Apache Pulsar.​</p>
</li>
</ul>
<h1 id="heading-challenges-of-processing-streaming-data"><strong>Challenges of Processing Streaming Data</strong></h1>
<ul>
<li><p><strong>Data Velocity</strong>: Streaming data can arrive at high speeds, making it difficult to process and analyze in real time.​</p>
</li>
<li><p><strong>Data Volume</strong>: The amount of streaming data can be overwhelming, making it challenging to store and process large amounts of information in real time.​</p>
</li>
<li><p><strong>Scalability</strong>: Processing streaming data can require significant computing resources, making it challenging to scale systems to handle increased volume and velocity.​</p>
</li>
<li><p><strong>Latency</strong>: The real-time nature of streaming data requires low latency processing, making it challenging to balance the need for real-time processing with the need for accuracy.​</p>
</li>
<li><p><strong>Security and Privacy</strong>: Protecting the privacy and security of streaming data can be challenging, particularly as the data can be sensitive and personal.​</p>
</li>
</ul>
<h1 id="heading-what-is-kubernetes">What is Kubernetes?</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676216593438/32dde55d-aaad-4fe4-9d35-9f34e2b62331.webp" alt class="image--center mx-auto" /></p>
<p>K8s is shorthand for "Kubernetes," which is an open-source platform for automating deployment, scaling, and management of containerized applications.</p>
<p><strong>NOTE: If you want to know more about Kubernetes, please refer to my blog on Kubernetes.</strong></p>
<h2 id="heading-how-do-k8s-come-into-the-picture">How do K8s come into the Picture?</h2>
<p>Kubernetes (K8s) was introduced into the world of streaming to address several key challenges and needs in the real-time data processing space, including:​</p>
<ul>
<li><p><strong>Resource Management</strong>: Streaming platforms require a large amount of computing resources to process real-time data. K8s provides a centralized platform for managing and allocating these resources, enabling organizations to ensure that they have the necessary resources to meet their real-time processing needs.​</p>
</li>
<li><p><strong>Scalability</strong>: K8s allows for horizontal scaling of streaming applications, making it possible to add more resources to the system as the number of users and the amount of data increases.​</p>
</li>
<li><p><strong>Automation</strong>: Managing complex real-time data processing platforms can be time-consuming and error-prone. K8s provides a high level of automation, enabling organizations to deploy and manage their streaming platforms more efficiently.​</p>
</li>
<li><p><strong>High Availability</strong>: Streaming platforms must be highly available to ensure that real-time data is processed in a timely manner. K8s provides built-in mechanisms for ensuring high availability, including automatic failover and recovery.​</p>
</li>
<li><p><strong>Portability</strong>: Streaming platforms are often deployed on a variety of infrastructures, including on-premises, public cloud, and hybrid cloud environments. K8s provides a common platform for deploying and managing streaming platforms across different infrastructures, enabling organizations to choose the best-fit infrastructure for their specific needs​</p>
</li>
</ul>
<h2 id="heading-popular-streaming-technologies">Popular Streaming Technologies</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676217015915/68c41b54-aee5-4fea-a265-a0eefa23ada2.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Apache Kafka</strong>: Apache Kafka is an open-source, distributed streaming platform that is widely used for ingesting, processing, and distributing real-time data. Apache Kafka can be deployed and managed using K8s using tools such as Strimzi, a Kubernetes operator for Apache Kafka.​</p>
</li>
<li><p><strong>Apache Flink</strong>: Apache Flink is an open-source, distributed stream processing framework that can be used to build complex, large-scale streaming applications. Apache Flink can be deployed and managed using K8s using tools such as Flink Operator, a Kubernetes operator for Apache Flink.​</p>
</li>
<li><p><strong>Redpanda</strong>: Redpanda is a distributed, real-time data streaming and processing platform. It was designed to provide high-throughput, low-latency data processing for time-sensitive use cases, such as financial trading and IoT applications.​</p>
</li>
<li><p><strong>Apache Pulsar</strong>: Apache Pulsar is an open-source, distributed streaming platform that can be used for ingesting, processing, and distributing real-time data. Apache Pulsar can be deployed and managed using K8s using tools such as Pulsar Operator, a Kubernetes operator for Apache Pulsar.​</p>
</li>
</ol>
<p>There are many more; I have only mentioned four here.</p>
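<p>As a concrete illustration, Apache Kafka can be stood up on K8s with the Strimzi operator mentioned above. The sketch below is based on the Strimzi quickstart; the namespace, manifest URLs, and the cluster name <code>my-cluster</code> come from that quickstart and may change between releases:</p>
<pre><code class="lang-bash"># create a namespace and install the Strimzi operator into it
kubectl create namespace kafka
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

# create a single-node Kafka cluster from the example manifest
kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka

# wait until the cluster reports Ready
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
</code></pre>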
<h1 id="heading-apache-kafka-and-apache-pulsar-building-blocks-for-real-time-data-streaming-platforms"><strong>Apache Kafka and Apache Pulsar: Building Blocks for Real-Time Data Streaming Platforms</strong></h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676488743207/910459e2-32a3-4718-bfec-36cd1a33394d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-kafka">​Kafka:​</h2>
<ul>
<li><p>LinkedIn: One of the earliest adopters of Apache Kafka, LinkedIn uses it to handle its real-time data streams and activity data.​</p>
</li>
<li><p>Netflix: Netflix uses Apache Kafka as a central repository for its application and infrastructure log data.​</p>
</li>
<li><p>Uber: Uber, the ride-hailing company, uses Apache Kafka to ingest, process, and distribute real-time data from multiple sources, such as driver locations, ride requests, and payment transactions.​</p>
</li>
</ul>
<h2 id="heading-pulsar">Pulsar:</h2>
<ul>
<li><p>Yahoo: Yahoo uses Apache Pulsar to power its data pipeline for online advertising.​</p>
</li>
<li><p><a target="_blank" href="http://Booking.com">Booking.com</a>: <a target="_blank" href="http://Booking.com">Booking.com</a> uses Apache Pulsar to handle real-time data streams from its customer's interactions with the website.​</p>
</li>
<li><p>Twitter: Twitter, the social networking platform for microblogging, uses Apache Pulsar to ingest, process, and distribute real-time data from multiple sources, such as user behaviour, content creation, and advertising.​</p>
</li>
</ul>
<h1 id="heading-best-practices-for-running-streaming-workloads-on-k8s"><strong>Best Practices For Running Streaming Workloads On K8s</strong></h1>
<ol>
<li><p><strong>Resource Management</strong>: Allocate resources like CPU, memory, and storage effectively to ensure that the containers running the streaming workloads have enough resources to run efficiently.</p>
</li>
<li><p><strong>Networking</strong>: Make sure that the network is optimized for low latency and high bandwidth to minimize data transfer time between the containers.</p>
</li>
<li><p><strong>Scalability</strong>: Implement auto-scaling to ensure that the number of containers can be increased or decreased based on the load on the system. This can be done using the Kubernetes Horizontal Pod Autoscaler (HPA); a minimal example follows this list.</p>
</li>
<li><p><strong>Data Persistence</strong>: Store the data generated by the streaming workloads persistently to ensure that the data is not lost in case of any failures. Kubernetes Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) can be used for this.</p>
</li>
<li><p><strong>Security</strong>: Implement security measures like network segmentation, encryption, and access control to secure the sensitive data being processed by the streaming workloads.</p>
</li>
<li><p><strong>Monitoring and Logging</strong>: Monitor the performance and behaviour of the streaming workloads to quickly identify and resolve any issues. Kubernetes provides several tools for monitoring, such as Prometheus, Grafana, and Fluentd, which can be used for this.</p>
</li>
<li><p><strong>Versioning</strong>: Maintain multiple versions of the streaming workloads and roll out updates and upgrades to the system in a controlled manner using Kubernetes Deployments.</p>
</li>
<li><p><strong>High Availability</strong>: Ensure that the streaming workloads are highly available to avoid downtime. Kubernetes provides features like pod anti-affinity and replicas to achieve high availability.</p>
</li>
</ol>
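<p>As a minimal sketch of the auto-scaling practice from point 3, an HPA can be attached to an existing Deployment with a single command (the deployment name and thresholds below are placeholders):</p>
<pre><code class="lang-bash"># scale the consumer deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization
kubectl autoscale deployment stream-consumer --cpu-percent=70 --min=2 --max=10

# inspect the resulting HorizontalPodAutoscaler
kubectl get hpa
</code></pre>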
<h1 id="heading-future-of-streaming-and-streaming-platforms">Future Of Streaming And Streaming Platforms</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676489083773/36d0e322-4646-4629-90c7-b6641d7923e7.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Edge Computing</strong>: With the growth of IoT and the increasing number of connected devices, edge computing will become increasingly important for streaming platforms. Edge computing allows data to be processed closer to the source, reducing the latency and bandwidth requirements of transmitting large amounts of data to centralized data centres.​</p>
</li>
<li><p><strong>5G Networks</strong>: The widespread adoption of 5G networks is likely to drive the growth of streaming platforms. 5G networks offer higher bandwidth, lower latency, and more reliable connections, enabling new use cases for real-time data processing and distribution.​</p>
</li>
<li><p><strong>Artificial Intelligence and Machine Learning</strong>: Artificial intelligence and machine learning will play an increasingly important role in streaming platforms. AI and ML can be used to analyze and make sense of large amounts of real-time data, enabling new use cases for streaming platforms such as predictive maintenance, fraud detection, and real-time personalization.​</p>
</li>
<li><p><strong>Cloud Native</strong>: The trend towards cloud-native architectures and containerized applications will continue to drive the development of streaming platforms. Cloud-native architectures offer scalable and flexible infrastructure, making it easier to deploy and manage large-scale streaming platforms.​</p>
</li>
<li><p><strong>Multi-Cloud and Hybrid Cloud</strong>: The trend towards multi-cloud and hybrid cloud environments is likely to shape the future of streaming platforms. Streaming platforms will need to be able to operate seamlessly across different cloud environments, enabling organizations to take advantage of the best-fit infrastructure for their specific needs.​</p>
</li>
</ul>
<h2 id="heading-statistics-of-streaming-workloads-running-over-k8s">Statistics of Streaming workloads running over K8s</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676489670443/2790e96f-a367-453f-bfdc-c87411876f99.png" alt class="image--center mx-auto" /></p>
<p>This data from the DoKC 2022 research report shows that 39% of the data workloads running on K8s are streaming/messaging workloads; among respondents the report classifies as "leaders", the figure is 48%.</p>
<h1 id="heading-resources">RESOURCES</h1>
<ol>
<li><p><a target="_blank" href="https://dok.community/data-on-kubernetes-2022-report/">DoKC Report</a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=ywYnG35wAt8">Data Streaming</a></p>
</li>
<li><p><a target="_blank" href="https://cloudinfrastructureservices.co.uk/kafka-vs-pulsar-whats-the-difference/#:~:text=In%20Kafka%2C%20each%20medium%20uses,from%20the%20data%20storage%20layer">Kafka vs Pulsar</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Dockerfile Simplified!
How to make your own Dockerfile?]]></title><description><![CDATA[Why would you need to create your own image:-
Either you and your team decided to dockerize the application you're building for ease of shipping and deployment, or you were unable to locate the part or service you wanted to use for your application o...]]></description><link>https://blog.devopscommunity.in/dockerfile-simplified-how-to-make-your-own-dockerfile</link><guid isPermaLink="true">https://blog.devopscommunity.in/dockerfile-simplified-how-to-make-your-own-dockerfile</guid><category><![CDATA[Beginner Developers]]></category><category><![CDATA[Docker]]></category><category><![CDATA[beginner]]></category><category><![CDATA[Devops]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Sachin Sharma]]></dc:creator><pubDate>Sun, 12 Feb 2023 04:27:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676147435805/d2a0e860-9bb7-44e0-a136-5aad9ba9e380.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675335328593/6b59da8f-a233-40c0-ad0f-0b23d1080a48.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-why-would-you-need-to-create-your-own-image">Why would you need to create your own image:-</h2>
<p>Either you and your team decided to dockerize the application you're building for ease of shipping and deployment, or you were unable to find the component or service you wanted to use for your application on DockerHub.</p>
<h2 id="heading-creating-your-own-image">Creating your own image:-</h2>
<p>First, we need to understand what we are containerizing, what application we are creating an image for, and how the application is built.</p>
<p>Step 1:- Create a file with the name "Dockerfile" in the project folder.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675339955395/5d32dacc-5928-456d-ba6c-b71fd3032697.png" alt /></p>
<p>Step 2:- In the Dockerfile a simple instruction and argument approach is followed.</p>
<p><code>[instruction] [argument]</code></p>
<p>All the instructions are on the left and are typed in all caps. Each of these instructions instructs Docker to perform a certain action while creating the image.</p>
<p>Let's start from the beginning of the Dockerfile.</p>
<p>The first line contains the base image. Every Docker image must be based on another image. The base image provides the foundation that our image builds on, so that when a container of our image runs, all the commands and tools that come with the base image are available.</p>
<p><code>FROM &lt;Base_Image&gt;</code></p>
<blockquote>
<p>"It is important that every Dockerfile must start with FROM instruction".</p>
</blockquote>
<p>Step 3:- In this step, the RUN instruction is used. We add all the commands that install and update the dependencies essential for our project to execute.</p>
<p><code>RUN &lt;Command&gt;</code></p>
<p>Step 4:- Now we use the COPY instruction to copy files from the local system into the docker image. This is done because the project that we want to execute is in our local system and we want to make our project available to the docker image.</p>
<p><code>COPY &lt;Source&gt; &lt;Destination&gt;</code></p>
<p>Step 5:- This is the final step in the creation of our Dockerfile. This step involves CMD instruction which stands for command and defines the command that will be run within the container when it starts.</p>
<p><code>CMD [Command]</code></p>
<p>The final file should look something like this:-</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674911099593/2cd29aa0-b0ac-4b68-9341-63b2e3143e1b.png" alt="Dockerfile" class="image--center mx-auto" /></p>
<p>Step 6:- Now that we've finally completed our Dockerfile. It's time to build it.</p>
<p>To build it we'll use <code>docker build -t my_app:1.0 &lt;Location_of_the_Dockerfile&gt;</code></p>
<p>Step 7:- The last step is to run the Docker image using <code>docker run my_app:1.0</code></p>
<p>Congratulations! You have successfully built your own Docker image from your Dockerfile, and it is now running on your system.</p>
<h2 id="heading-now-try-it-for-yourself-and-dockerize-any-project-you-want-to-all-the-best">Now try it for yourself and dockerize any project you want to. All the best.</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675335360158/10d4db42-3415-4c30-9336-3b18da6b9ce7.gif" alt class="image--center mx-auto" /></p>
<h1 id="heading-thank-you-for-reading-do-leave-a-like-or-a-comment-if-you-found-this-blog-to-be-helpful"><strong>Thank you for reading. Do leave a like or a comment if you found this blog to be helpful.</strong></h1>
<h1 id="heading-feedback-is-highly-appreciated"><strong>Feedback is highly appreciated.</strong></h1>
<p>Follow me on Twitter at <a target="_blank" href="https://twitter.com/sachintwts"><code>sachintwts</code></a> <strong>for more content like this.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Architecture and Components]]></title><description><![CDATA[What is Kubernetes?
K8s is shorthand for "Kubernetes," which is an open-source platform for automating deployment, scaling, and management of containerized applications. It provides a way to manage multiple containers as a single unit, known as a "po...]]></description><link>https://blog.devopscommunity.in/kubernetes-architecture-and-components</link><guid isPermaLink="true">https://blog.devopscommunity.in/kubernetes-architecture-and-components</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[WeMakeDevs]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sajjan Yadav]]></dc:creator><pubDate>Sat, 11 Feb 2023 19:34:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676126988955/3146465c-28d4-4599-a905-a3f659c5affb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-kubernetes"><strong>What is Kubernetes?</strong></h1>
<p>K8s is shorthand for "Kubernetes," which is an open-source platform for automating deployment, scaling, and management of containerized applications. It provides a way to manage multiple containers as a single unit, known as a "pod," and can be used to run, deploy, and manage containerized applications in a cluster of machines.</p>
<h1 id="heading-why-use-kubernetes">Why use Kubernetes?</h1>
<p>K8s helps with many tasks. Some of its key features are:</p>
<ul>
<li><p><strong>Container orchestration</strong>: K8s enables the deployment, scaling, and management of containerized applications, making it easier for developers to build, deploy, and manage their applications in a production environment.​</p>
</li>
<li><p><strong>Automated scaling</strong>: K8s can automatically scale resources (e.g. CPU, memory, and storage) based on the needs of the applications, ensuring that resources are used efficiently and cost-effectively.​</p>
</li>
<li><p><strong>Load balancing</strong>: K8s can automatically distribute incoming traffic across multiple replicas of an application, providing high availability and resilience against failures.​</p>
</li>
<li><p><strong>Self-healing</strong>: K8s can automatically detect and recover from failures, ensuring that applications are highly available and always running.​</p>
</li>
<li><p><strong>Configuration management</strong>: K8s provides a centralized way to manage and store configuration information, making it easier to manage complex application deployments.​</p>
</li>
</ul>
<h1 id="heading-k8s-architecture">K8s Architecture</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676129687025/0447e6a1-31b1-414c-9634-dafa9c295074.jpeg" alt class="image--center mx-auto" /></p>
<p>K8s architecture mainly consists of two components: the <strong>Control Plane</strong> (previously also called the Master Node) and the <strong>Worker Nodes</strong>. We communicate with the Control Plane through the K8s CLI, called <strong>kubectl</strong>. The Control Plane distributes traffic and schedules the pods, which live on the worker nodes. So we can say the worker nodes are where the application actually runs, while the control plane manages the worker nodes. Now let's discuss each component of the Control Plane and the Worker Nodes.</p>
<h2 id="heading-control-plane"><strong>Control Plane</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676130377384/14a120c4-c1b3-4053-9c80-e09d96a9abea.jpeg" alt class="image--center mx-auto" /></p>
<p>The Control Plane refers to a set of components that manage the state of the cluster and enforce the desired state of the system. The control plane components are responsible for maintaining the overall health and stability of the cluster and ensuring that the desired state of the system is maintained.</p>
<h3 id="heading-components-of-control-plane"><strong>Components of Control Plane</strong></h3>
<ol>
<li><h4 id="heading-api-server"><strong>API Server</strong></h4>
<p> The API Server is the central component that exposes the K8s API and serves as the front end for all cluster operations. It processes REST operations and acts as a bridge between the control plane and the etcd store. The API Server is responsible for maintaining the overall state of the cluster, including the current state of all objects such as Pods, Services, and Deployments.</p>
</li>
<li><h4 id="heading-etcd"><strong>etcd</strong></h4>
<p> etcd is a distributed, consistent key-value store used to store the configuration data for the cluster. It serves as the primary datastore for the cluster and holds all the information about the state of the system, including the current state of all objects. etcd is designed to be highly available, meaning that even if a node fails, the data stored in etcd is still available to the remaining nodes.</p>
</li>
<li><h4 id="heading-controller-manager"><strong>Controller Manager</strong></h4>
<p> The Controller Manager is a component that runs various controllers such as Replication Controller, Endpoint Controller, Namespace Controller, Node Controller, Service Account and Token Controllers and others, which handle routine tasks such as replicating Pods, managing node health, and handling endpoints. The Controller Manager is responsible for maintaining the desired state of the system and ensuring that any deviations from the desired state are corrected.</p>
</li>
<li><h4 id="heading-scheduler"><strong>Scheduler</strong></h4>
<p> The Scheduler is responsible for assigning Pods to nodes based on resource availability and other constraints. It determines the best node to run a Pod on and ensures that the resources required by the Pod are available on the chosen node. Note that the Scheduler only decides which node a pod should run on; the component that actually starts the pod on that node is the <strong>KUBELET</strong>.</p>
</li>
</ol>
<h2 id="heading-worker-node"><strong>Worker Node</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676131365536/7ef4c494-8e4c-4509-aef7-d3e536ef55eb.png" alt class="image--center mx-auto" /></p>
<p>The worker node is responsible for executing and managing the containers that make up the applications in a cluster. It does this by performing many tasks such as Container Scheduling, Container Execution and Networking with the help of different components. It works closely with the control plane components to ensure that containers are running correctly and that the desired state of the system is maintained.</p>
<h3 id="heading-components-of-worker-node"><strong>Components of Worker Node</strong></h3>
<ol>
<li><h4 id="heading-container-runtime"><strong>Container runtime</strong></h4>
<p> A container runtime is software that runs containers on a node. Common container runtimes include Docker and rkt. The container runtime is responsible for starting and stopping containers, monitoring the health of containers, and reporting the status of containers to the control plane.</p>
</li>
<li><h4 id="heading-kubelet"><strong>Kubelet</strong></h4>
<p> The Kubelet is the primary agent running on each worker node. It communicates with the API server to receive commands to execute and reports the status of the node and its containers. The Kubelet is responsible for starting and stopping containers, managing volumes, and reporting the resource usage of the node to the control plane.</p>
</li>
<li><h4 id="heading-kube-proxy"><strong>kube-proxy</strong></h4>
<p> The kube-proxy is a network proxy that runs on each worker node. It is responsible for implementing the Kubernetes network model, which includes configuring network routes and enforcing network policies.</p>
<p> <em>or in simple language</em></p>
<p> Since many applications run inside a node, when an application replica makes a request for a database, kube-proxy forwards the request to the replica running on the same node that initiated the request, instead of the service randomly forwarding it to any replica.</p>
</li>
<li><h4 id="heading-pods"><strong>Pods</strong></h4>
<p> Pods are the smallest and simplest deployable units in Kubernetes. They represent a single instance of a running process in the cluster. A pod contains one or more containers, and all containers in a pod run on the same node and share the same network namespace. This means that they can communicate with each other using <a target="_blank" href="http://localhost">localhost</a>, and share the same IP address, hostname, and network resources.</p>
</li>
<li><h4 id="heading-container-network-interface-cni"><strong>Container network interface (CNI)</strong></h4>
<p> The CNI is a pluggable interface for connecting containers to a network. The CNI provides the necessary network configuration for containers, such as assigning IP addresses and configuring network routes.</p>
</li>
</ol>
<h1 id="heading-components-of-kubernetes">Components of Kubernetes</h1>
<h2 id="heading-1-services"><strong>1. Services</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676135759373/f6c2da70-2a99-4dfa-926d-cd565c7d6553.png" alt class="image--center mx-auto" /></p>
<p>Services provide stable network identities for pods and are used to expose applications running in the cluster to external clients. Services provide a stable IP address and DNS name for pods, allowing external clients to communicate with the application even if the pods themselves are rescheduled to different nodes.</p>
<h3 id="heading-types-of-services"><strong>Types of Services</strong></h3>
<ol>
<li><p><strong>ClusterIP</strong></p>
<p> A ClusterIP service is the default service type in Kubernetes. It provides a stable internal IP address and DNS name for a set of pods, allowing other workloads inside the cluster to communicate with the application even if the pods themselves are rescheduled to different nodes. It is not reachable from outside the cluster.</p>
</li>
<li><p><strong>NodePort</strong></p>
<p> A NodePort service is a service that exposes a specific port on the nodes in the cluster, allowing external clients to access the application directly. This type of service is useful when an application requires a static IP address or when it needs to be accessible from outside the cluster.</p>
</li>
<li><p><strong>LoadBalancer</strong></p>
<p> A LoadBalancer service is used to provide external access to an application running in the cluster. It provides a stable IP address and DNS name that is load balanced across the nodes in the cluster, ensuring that external clients can access the application even if one or more nodes are unavailable.</p>
</li>
<li><p><strong>ExternalName</strong></p>
<p> An ExternalName service is used to map an external DNS name to a Kubernetes service. This type of service is useful when you want to access an external service from within the cluster without having to create a load balancer or expose a cluster IP address.</p>
</li>
</ol>
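<p>As a quick illustration, a Deployment can be exposed through a Service with a single command (the names and ports are placeholders):</p>
<pre><code class="lang-bash"># expose a deployment as a ClusterIP service on port 80
kubectl expose deployment my-app --port=80 --target-port=8080

# or as a NodePort service reachable from outside the cluster
kubectl expose deployment my-app --type=NodePort --port=80 --name=my-app-nodeport
</code></pre>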
<h2 id="heading-2-ingress"><strong>2. Ingress</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676135964921/f45b2830-9955-4c9a-94e6-819d6a1b65fa.jpeg" alt class="image--center mx-auto" /></p>
<p>Ingress provides external access to services in a cluster. It defines a set of rules for incoming traffic, allowing incoming requests to be directed to the correct service based on the URL or hostname.</p>
<p>Ingress works by providing a single entry point for external clients to access services in a cluster. It acts as a reverse proxy, routing incoming requests to the correct service based on the rules defined in the Ingress resource.</p>
<h3 id="heading-function-of-ingress"><strong>Function of Ingress</strong></h3>
<ol>
<li><p><strong>Load balance incoming traffic</strong></p>
<p> Ingress can be used to load balance incoming traffic across multiple replicas of a service, ensuring that the traffic is distributed evenly across the available resources.</p>
</li>
<li><p><strong>Provide SSL termination</strong></p>
<p> Ingress can be configured to provide SSL termination, allowing incoming SSL requests to be decrypted and redirected to the correct service in the cluster.</p>
</li>
<li><p><strong>Provide authentication and authorization</strong></p>
<p> Ingress can be configured to enforce authentication and authorization rules, allowing only authorized users to access specific services in the cluster.</p>
</li>
</ol>
<h2 id="heading-3-configmap">3. Confi<strong>gMa</strong>p</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676136430071/b9c9c35f-307f-47c1-b70e-049ddfb8db9e.png" alt class="image--center mx-auto" /></p>
<p>A ConfigMap allows you to manage configuration data for your applications. It provides a convenient way to store configuration data that can be easily referenced by your applications.</p>
<p>A ConfigMap is defined as a YAML file that contains key-value pairs. The keys in the ConfigMap can be referenced by environment variables, command-line arguments, or as files stored on disk. This allows you to manage configuration data in a centralized manner, making it easier to manage and update your applications.</p>
<p>ConfigMaps are useful in a number of ways:</p>
<ol>
<li><p><strong>Separating configuration data from code</strong>: By using a ConfigMap to store configuration data, you can keep configuration separate from the code of your applications. This makes it easier to update configuration data without having to update the code of your applications.</p>
</li>
<li><p><strong>Sharing configuration data between applications</strong>: ConfigMaps can be used to share configuration data between multiple applications, making it easier to manage common configurations across multiple applications.</p>
</li>
<li><p><strong>Making configuration data available to containers at runtime</strong>: The data stored in a ConfigMap can be easily referenced by containers at runtime, making it easy to manage configuration data for your applications.</p>
</li>
</ol>
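<p>For example, a ConfigMap can be created directly from key-value pairs on the command line (the names and values are placeholders):</p>
<pre><code class="lang-bash"># create a ConfigMap from literal key-value pairs
kubectl create configmap app-config --from-literal=LOG_LEVEL=info --from-literal=APP_MODE=production

# inspect the result
kubectl get configmap app-config -o yaml
</code></pre>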
<h2 id="heading-4-secret">4. Secret</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676136789078/a186376b-530a-4dd9-abe2-20f9265a2e4d.jpeg" alt class="image--center mx-auto" /></p>
<p>A Secret allows you to manage sensitive information, such as passwords, tokens, and SSL certificates. It provides a secure way to store and manage sensitive information in a cluster.</p>
<p>A Secret is defined as a YAML file that contains key-value pairs. The values in a Secret are encoded using base64, and they are decoded by containers at runtime. This keeps sensitive information out of plain-text configuration files, though note that base64 is an encoding, not encryption, so access to Secrets should still be restricted.</p>
<p>An example of a base64-encoded password:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676138374689/92782a57-7179-4c98-a3ce-5ea4c672a0cd.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-5-volumes">5. Volumes</h2>
<p>Volumes are a way to persist data in a Kubernetes cluster. They allow you to store data that needs to persist even after a container has been deleted, such as data that needs to be shared between containers or data that needs to persist even if the cluster experiences failures or restarts.</p>
<p>Type of Volumes:-</p>
<ol>
<li><p><strong>EmptyDir</strong>: An EmptyDir volume is created when a Pod is created and deleted when the Pod is deleted. It is useful for sharing data between containers within a Pod.</p>
</li>
<li><p><strong>HostPath</strong>: A HostPath volume mounts a file or directory from the host node file system into a container. It is useful for sharing data between a container and the host node file system.</p>
</li>
<li><p><strong>PersistentVolumeClaim</strong>: A PersistentVolumeClaim is a request for storage by a user. It is used to dynamically provision storage from a storage class.</p>
</li>
</ol>
<h2 id="heading-6-statefulset">6. StatefulSet</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676138967585/44fce15d-cc1f-4d8a-86fb-c645472c05c2.png" alt class="image--center mx-auto" /></p>
<p>StatefulSets is used to manage stateful applications. A stateful application is an application that requires persistent storage, such as a database.</p>
<p>StatefulSets provide several key features to manage stateful applications in a Kubernetes cluster:</p>
<ol>
<li><p><strong>Stable Network Identities</strong>: Each Pod in a StatefulSet is assigned a stable hostname, which allows for the persistence of network identities across Pod restarts.</p>
</li>
<li><p><strong>Persistent Storage</strong>: StatefulSets provide a way to manage Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), which are used to store data persistently in a Kubernetes cluster.</p>
</li>
<li><p><strong>Rolling Updates</strong>: StatefulSets provide a way to perform rolling updates to Pods, which allows for updating a stateful application without disrupting its operation.</p>
</li>
</ol>
<h2 id="heading-7-deployment">7. Deployment</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676139822893/9c6ceb89-bf56-4ba3-ba47-2e1f8af4b2d4.png" alt class="image--center mx-auto" /></p>
<p>Deployment provides a declarative way to manage and update Replication Controllers and Pods.</p>
<p>A Deployment defines the desired state for a set of Pods and ensures that the desired number of replicas are running and available. In case of a failure, the Deployment will automatically recover the Pods to the desired state.</p>
<h3 id="heading-features-of-deployment">Features of Deployment</h3>
<ol>
<li><p><strong>Rolling Updates</strong>: Deployments provide a way to perform rolling updates to Pods, which allows for updating an application without disrupting its operation.</p>
</li>
<li><p><strong>Rollback</strong>: Deployments provide a way to roll back to a previous version of the application in case of a failure.</p>
</li>
<li><p><strong>Scaling</strong>: Deployments provide a way to scale up or down the number of replicas of a Pod.</p>
</li>
</ol>
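<p>All three features map to one-line kubectl commands. A minimal sketch (the deployment name and image versions are hypothetical):</p>
<pre><code class="lang-bash">kubectl create deployment web --image=nginx:1.25    # declare the desired state
kubectl scale deployment web --replicas=3           # scaling
kubectl set image deployment/web nginx=nginx:1.26   # triggers a rolling update (container assumed to be named "nginx")
kubectl rollout status deployment/web               # watch the update progress
kubectl rollout undo deployment/web                 # roll back to the previous revision
</code></pre>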
<h2 id="heading-8-labels-and-selectors">8. Labels and Selectors</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676141323958/5981044e-4f61-40d5-8679-f1e6ad361804.png" alt class="image--center mx-auto" /></p>
<p>Labels and selectors are used to organize and identify resources in a cluster.</p>
<p>Labels are key-value pairs that are attached to resources such as Pods, Services, and Deployments. Labels can be used to describe the characteristics of a resource, such as its environment, role, or version.</p>
<p>Selectors are used to select a set of resources based on their labels. A selector is a set of label requirements, expressed as key-value pairs. For example, a selector can be used to select all Pods that have a label with the key "environment" and value "production".</p>
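<p>For example, attaching a label and then selecting on it from the command line (the Pod name is hypothetical):</p>
<pre><code class="lang-bash">kubectl label pod my-pod environment=production            # attach a key-value label
kubectl get pods -l environment=production                 # equality-based selector
kubectl get pods -l 'environment in (production,staging)'  # set-based selector
</code></pre>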
<h1 id="heading-resources">Resources</h1>
<ul>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=X48VuDVv0do"><strong>Kubernetes Tutorial for Beginners</strong></a></p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=PN3VqbZqmD8&amp;list=PL5uLNcv9SibBrCVC9lKwRHOV6GjUbAhIn&amp;index=3"><strong>Kubernetes 101 workshop</strong></a></p>
</li>
<li><p><a target="_blank" href="https://kubernetes.io/">Kubernetes website</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kubernetes/kubernetes"><strong>Kubernetes Github</strong></a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[GIT for Beginners]]></title><description><![CDATA[Git is a distributed version control system that allows developers and operations teams to collaborate and keep track of the changes made on a project. GIT as a DevOps tool empowers collaboration and faster release cycles.
Git has two repository type...]]></description><link>https://blog.devopscommunity.in/git-for-beginners</link><guid isPermaLink="true">https://blog.devopscommunity.in/git-for-beginners</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Aman Srivastav]]></dc:creator><pubDate>Wed, 08 Feb 2023 10:06:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/wX2L8L-fGeA/upload/d3ba4c0baeb718bc6435297847acf9b5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Git is a distributed version control system that allows developers and operations teams to collaborate and keep track of the changes made on a project. GIT as a DevOps tool empowers collaboration and faster release cycles.</p>
<p>Git has two repository types local and remote.</p>
<p>The local repository is on the local machine and the remote is on the central server. We can always pull/push our code/data on remote/local repositories as well and others can also make changes to the code/data.</p>
<p>In the local repository, changes move from the working area to the staging area and are then committed. Files in the working area start out in the untracked state.</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/ada2660d-24fd-445a-a7a8-5f711c472dc3/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230207%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230207T093847Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=9910351f4dfc576c593be221dc1a39d4d90ae9104e0e7040efcfcd65f99b2dee&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<h3 id="heading-what-is-github"><strong>What is GitHub?</strong></h3>
<p>GitHub is a web-based platform that provides hosting for software development projects that use the Git version control system. It allows developers to store, manage and track changes to their code over time, collaborate with others on projects and contribute to open-source projects.</p>
<h3 id="heading-differences-between-git-and-github"><strong>Differences between git and GitHub:</strong></h3>
<ol>
<li><p>Git is a version control system, while GitHub is a web-based hosting platform for Git repositories.</p>
</li>
<li><p>Git allows you to manage and keep track of your source code history locally, while GitHub provides a remote, collaborative environment.</p>
</li>
<li><p>Git commands are executed in the terminal, while GitHub offers a graphical user interface to interact with Git repositories.</p>
</li>
<li><p>Git is free and open-source, while GitHub offers both free and paid plans for individuals and organizations.</p>
</li>
<li><p>Git enables collaboration through remote repository cloning and merging, while GitHub provides additional collaboration features such as pull requests, issues, and project boards.</p>
</li>
</ol>
<h3 id="heading-git-installation"><strong>GIT installation</strong></h3>
<p>We can simply install Git on any platform using this link and following the steps:</p>
<p><a target="_blank" href="https://git-scm.com/downloads"><strong>https://git-scm.com/downloads</strong></a></p>
<p>I have installed it using Homebrew; please find the commands below.</p>
<pre><code class="lang-bash">brew install git: To install git 
git --version: to check the current version of git
</code></pre>
<h3 id="heading-git-repository"><strong>Git repository:</strong></h3>
<p>A Git repository is a virtual storage area where you can save versions of your code. It is used to track changes in your source code over time and collaborate with other developers by sharing your code and contributing to others' code. The repository contains all the files, branches, and history of changes that have been made to the code.</p>
<p><img src="https://www.devopsuniversity.org/wp-content/uploads/2021/01/git-repository.jpg" alt="Git Tutorials - Git Commands - DevOps Courses and Certification" /></p>
<h3 id="heading-git-init"><strong>Git Init:</strong></h3>
<p>The git init command is the first command that you will run in Git. It creates a new, blank repository, and it can also turn an existing project into a Git project. Most Git commands run inside a repository, but init is run to create one, so it works outside a repository.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#git init : It is used to initialising git</span>
</code></pre>
<h3 id="heading-git-add"><strong>Git Add:</strong></h3>
<p>The git add command is used to add file contents to the Index (Staging Area). This command updates the current content of the working tree to the staging area. It also prepares the staged content for the next commit. Every time we add or update any file in our project, it is required to forward updates to the staging area.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#git add &lt;file_name&gt;: It is used to put data in staging area</span>
</code></pre>
<h3 id="heading-git-status"><strong>Git Status:</strong></h3>
<p>The git status command is used to display the state of the repository and staging area. It allows us to see the tracked, untracked files and changes. This command will not show any commit records or information.</p>
<p>Mostly, it is used to display the state between Git Add and Git commit commands. We can check whether the changes and files are tracked or not.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#git status: To check the status of data , after git add it should be in staging area</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675801004375/397e0d33-7091-4893-9276-fb87c730cc72.png?auto=compress,format&amp;format=webp" alt /></p>
<p>We also have to let Git know who is making the changes, so it can record an author on each commit.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#git config [user.name](http://user.name) “aman”</span>

<span class="hljs-comment">#git config [user.email](http://user.email) “aman@example.com”</span>
</code></pre>
<p>Here are a few more useful commands:</p>
<pre><code class="lang-bash">git restore "file.txt"         # restore the previously committed version of the file
git commit -m "updated text"   # record the staged changes with a message, without opening a text editor
git add .                      # stage more than one file at once
git log                        # get the log details
git log --name-only            # detailed logs including the file names touched by each commit
git log -n 1                   # show only the latest commit in the repository
</code></pre>
<h3 id="heading-git-braches"><strong>GIT Braches:</strong></h3>
<p>In Git, branches are independent lines of development within a project. They let us write and test code away from the master branch, so mistakes never touch it.</p>
<p>They allow multiple team members to work on different features or bug fixes simultaneously without interfering with each other. Branches have their own set of commits, and changes made on one branch do not affect the others. Developers can create pull requests to merge changes from one branch to another. The main branch is called "master" and is considered stable. Git flow is a branching model that helps organise and track work in a structured way for large projects.</p>
<pre><code class="lang-bash"><span class="hljs-comment">#git branch “branch_name” : Create a new branch </span>

<span class="hljs-comment">#git checkout “branch_name”: switch to an existing branch</span>

<span class="hljs-comment">#git checkout -b “branch_name”: create a new branch and switch to it </span>

<span class="hljs-comment">#git branch -d “branch_name: To delete the branch ‘d stands for delete’</span>

<span class="hljs-comment">#git branch: List all the branches </span>

<span class="hljs-comment">#git branch -a: Lists all branches in the local and remote repository.</span>

<span class="hljs-comment"># git merge “branch_name”: 1. Merges the specified branch into the current branch. ( It can be done through master branch)</span>
</code></pre>
<p>Note: a fast-forward merge happens when the current branch has no extra commits compared to the branch being merged in, so Git simply moves the branch pointer forward.</p>
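<p>A minimal sketch of a fast-forward merge (the file and branch names are hypothetical):</p>
<pre><code class="lang-bash">git checkout -b feature   # branch off master
echo "change" &gt;&gt; file.txt
git add file.txt &amp;&amp; git commit -m "feature work"
git checkout master       # master has no new commits of its own...
git merge feature         # ...so this fast-forwards: the pointer moves, no merge commit is created
</code></pre>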
<pre><code class="lang-bash"><span class="hljs-comment">#git pull "remote" "branch_name": Fetches the specified branch from the remote repository and merges it into the current branch.</span>

<span class="hljs-comment"># git branch -D "branch_name": Force deletes the specified branch.</span>

<span class="hljs-comment">#git branch -m "old_branch" "new_branch": Renames the specified branch.</span>

<span class="hljs-comment">#git stash: Stashes changes in the working directory, allowing you to switch branches without committing changes.</span>

<span class="hljs-comment">#git stash apply: Applies the changes stashed in the last stash command.</span>
</code></pre>
<p><strong>HEAD:</strong> It is the current location in the repository; it moves automatically whenever we switch branches.</p>
<p>As you can see, HEAD always points to the last commit on the currently checked-out branch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675801866767/38f48c42-c3fe-4f31-81af-9fa5d8906f3b.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-git-remote"><strong>Git Remote:</strong></h3>
<p>In Git, the term remote refers to the remote repository: a shared repository that all team members use to exchange their changes. A remote repository is stored on an internal server or a code-hosting service such as GitHub or GitLab.</p>
<pre><code class="lang-bash">git remote: The given <span class="hljs-built_in">command</span> is providing the remote name as the origin. Origin is the default name <span class="hljs-keyword">for</span> the remote server, <span class="hljs-built_in">which</span> is given by Git.
git remote -v: The above output is providing available remote connections. If a repository contains more than one remote connection, this <span class="hljs-built_in">command</span> will list them all.
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1675803338775/a243a9a8-5a8a-4280-953d-97abdceee6a2.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-git-push-pull-andamp-clone"><strong>Git Push, pull &amp; clone:</strong></h3>
<p>The <strong>git push</strong> command is used to upload local commits to a remote repository. The syntax for the command is as follows:</p>
<pre><code class="lang-bash">git push [remote-name] [branch-name]
</code></pre>
<p>Where remote-name is the name of the remote repository, and branch-name is the name of the branch that you want to push to the remote repository. This command is typically used after making local changes and committing them to your local repository, to share those changes with others who are collaborating on the project.</p>
<p><strong>git pull</strong> is a command used in Git to retrieve new changes from a remote repository and merge them into the current branch. It is a combination of <strong><em>git fetch</em></strong> and <strong><em>git merge</em></strong> commands and is typically used to synchronize a local repository with a remote one.</p>
<pre><code class="lang-bash">git pull &lt;remote&gt; &lt;branch&gt;
</code></pre>
<p>where &lt;remote&gt; is the name of the remote repository and &lt;branch&gt; is the name of the branch you want to pull changes from.</p>
<p><strong>git clone:</strong> The git clone command is used to create a copy of a remote repository on a local machine.</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> [remote-repository-url]
</code></pre>
<p>Where remote-repository-URL is the URL of the remote repository that you want to clone. This command creates a new directory with the same name as the remote repository and copies all of the files and their history into the new directory. This is useful for creating local copies of remote repositories for development or collaboration.</p>
<h3 id="heading-git-fetch-and-merge">Git Fetch and Merge:</h3>
<p><strong>Git Fetch:</strong> Git Fetch is a command used in Git to retrieve new changes from a remote repository and bring them into the local repository. It is typically used to synchronize a local repository with a remote one. The command retrieves the changes but does not merge them into the local repository, allowing the user to review the changes before merging them. The command is used in the following format:</p>
<pre><code class="lang-bash">git fetch &lt;remote&gt; &lt;branch&gt;
</code></pre>
<p>where &lt;remote&gt; is the name of the remote repository and &lt;branch&gt; is the name of the branch you want to fetch the changes from. If no branch is specified, git fetch retrieves changes from all branches of the remote repository.</p>
<p>Git Merge: git merge is a command used in Git to combine changes from multiple branches into the current branch. It is typically used to bring changes from a remote branch into the local branch that you are currently working on.</p>
<pre><code class="lang-bash">git merge &lt;branch&gt;
</code></pre>
<p>where &lt;branch&gt; is the name of the branch you want to merge into the current branch.</p>
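<p>Putting the two together, a fetch followed by a merge is what git pull does in one step. A sketch, assuming a remote named origin and a branch named main:</p>
<pre><code class="lang-bash">git fetch origin main       # download new commits; the working tree is untouched
git log HEAD..origin/main   # review what came in before integrating it
git merge origin/main       # integrate; fetch + merge together is equivalent to 'git pull origin main'
</code></pre>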
<p>As a scenario: if a merge brings in commits with wrong messages from two different users, we can correct them and commit the corrected result along with the new changes.</p>
<h3 id="heading-fork">Fork:</h3>
<p>A fork in Git is simply a copy of an existing repository in which the new owner disconnects the codebase from previous committers. A fork often occurs when a developer becomes dissatisfied or disillusioned with the direction of a project and wants to detach their work from that of the original project.</p>
<p>A fork is a feature of hosting platforms such as GitHub (not a Git command) that allows users to create a separate copy of a repository under their own account. This lets users make changes to the code and propose changes back to the original repository.</p>
<p>On the other hand, <code>git clone</code> is a command used to copy an existing Git repository from a remote server to a local computer, <code>git clone</code> creates a local copy of the entire repository, including all branches, commits, and history. The cloned repository is linked to the original repository, so changes can be easily synced between the two.</p>
<p>In summary, a fork is a hosting-platform feature that enables users to create a separate copy of a repository, while <code>git clone</code> is a Git command used to create a local copy of a remote repository. The main difference is that a fork creates a separate copy under a different account, while a clone creates a local copy of the entire repository.</p>
<h3 id="heading-git-rebase"><strong>Git rebase:</strong></h3>
<p><strong>Git rebase</strong> is a command used to reapply commits from one branch on top of another branch. The <code>git rebase</code> command performs this operation. The main use of rebasing is to clean up a feature branch by removing unnecessary merge commits and producing a linear history, which makes the branch easier to understand and maintain.</p>
<pre><code class="lang-bash">git rebase master
</code></pre>
<p>In general, <code>git merge</code> is a safer option for integrating changes in a collaborative environment, as it preserves the history of both branches and makes it clear when and how changes were integrated. On the other hand, <code>git rebase</code> can be useful for cleaning up a branch's history and avoiding multiple merge commits, but it should only be used in a non-collaborative environment or with caution in a collaborative one.</p>
<h3 id="heading-git-cherry-pick"><strong>git cherry-pick:</strong></h3>
<p><code>git cherry-pick</code> is a Git command that allows you to select and apply specific commits from one branch onto another branch. This can be useful when you want to apply specific changes from a branch without merging the entire branch into yours.</p>
<pre><code class="lang-bash">git cherry-pick &lt;<span class="hljs-built_in">hash</span> of the commit &gt;
</code></pre>
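<p>A typical flow looks like this (the branch name and hash are hypothetical):</p>
<pre><code class="lang-bash">git log --oneline feature-branch   # find the hash of the commit you want, e.g. a1b2c3d
git checkout master
git cherry-pick a1b2c3d            # apply just that one commit onto master
</code></pre>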
<h3 id="heading-git-revert-and-reset"><strong>Git Revert and Reset:</strong></h3>
<p><strong>Git Revert:</strong> Used to undo a commit that has already been made, while keeping a record of the undo in the Git history. Below are the two most common forms; there are a few more.</p>
<pre><code class="lang-bash">git revert &lt;commit-ish&gt;: The commit option is used to revert a commit. To revert a commit, we need the commit reference id. The git <span class="hljs-built_in">log</span> <span class="hljs-built_in">command</span> can access it.
git revert -e &lt;commit-ish&gt;: It is used to edit the commit message before reverting the commit. ( -e stands <span class="hljs-keyword">for</span> edit)
</code></pre>
<p><strong>Git Reset:</strong> git reset is a Git command used to reset the state of your repository to a specific commit or state. It can be used to unstage changes, to restore files, or to move the branch pointer to a different commit.</p>
<p>There are three main options for <code>git reset</code>: <code>--mixed</code>, <code>--soft</code>, and <code>--hard</code>. The default option is <code>--mixed</code>, which resets the branch pointer and the index to the specified commit but keeps the changes in the working directory, allowing you to review and recommit them if desired.</p>
<pre><code class="lang-bash">git reset [commit]: Resetting the branch pointer to a specific commit:
git reset: This <span class="hljs-built_in">command</span> unstages changes, leaving them <span class="hljs-keyword">in</span> the working directory.
git reset [file]: This <span class="hljs-built_in">command</span> unmodifies the specified file, leaving it <span class="hljs-keyword">in</span> the working directory.
git reset --hard [commit]: This <span class="hljs-built_in">command</span> resets the branch and index to the specified commit, discarding all changes <span class="hljs-keyword">in</span> the working directory.
</code></pre>
<p>Here is a picture that will help you to understand more:</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/d314c312-226c-48a9-a8ca-b25d62004177/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230225%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230225T152951Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=c4dcc9e0463cf00a2ddc8903862e9f56891d541a71dcd933f324ac0a162c290e&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<h3 id="heading-stashing"><strong>Stashing:</strong></h3>
<p>"Stashing" in Git is a way to temporarily save changes that have not yet been committed to a branch. This allows you to switch to a different branch, perform some other work, and then later come back and apply the stashed changes. The stash is stored as a "stack" of changes, so you can have multiple stashes and choose which one to apply later.</p>
<pre><code class="lang-bash">git stash : To save the changes 
git stash save <span class="hljs-string">"&lt;Stashing Message&gt;"</span> : We can save the changes with a message
git stash list: It gives the list of all stash changes with stash@{} ID. 
git stash apply: <span class="hljs-built_in">command</span> is used to reapply changes that were previously stashed using git stash. This allows you to switch back to a branch you were working on, apply the stashed changes, and <span class="hljs-built_in">continue</span> your work.
git stash apply &lt;stash id&gt;: It will save the changes to the stage were you want. 
git stash show: This <span class="hljs-built_in">command</span> will show the file that is stashed and changes made on them.
git stash show -p: -p stands <span class="hljs-keyword">for</span> the partial stash. The given <span class="hljs-built_in">command</span> will show the edited files and content.
git stash pop: It is a Git <span class="hljs-built_in">command</span> that combines the functionality of both git stash apply and git stash drop. It reapplies the changes from the most recent stash, and <span class="hljs-keyword">then</span> removes the stash from the stash list.
git stash drop: This <span class="hljs-built_in">command</span> is used to delete a stash from the queue. Generally, it deletes the most recent stash.
git stash drop &lt;stash id&gt;: To delete a particular stash from the queue.
git stash clearL: This <span class="hljs-built_in">command</span> allows deleting all the available stashes at once.
git stash branch &lt;Branch Name&gt;: This <span class="hljs-built_in">command</span> will create a new branch and transfer the stashed work on that.
</code></pre>
<h3 id="heading-git-reflog"><strong>Git Reflog:</strong></h3>
<p>"git reflog" is a Git command that provides a log of changes to the state of your Git repository, including branch references and HEAD. The reflog is stored in the <code>.git/logs/</code> directory and is a record of all changes to the branches and HEAD in your repository, including branch updates, resets, and other operations.</p>
<p>The reflog is useful in a variety of situations, including:</p>
<ol>
<li><p>Recovering lost commits: If you have lost a commit or made a mistake during a reset or rebase operation, you can use the reflog to find the commit and recover it.</p>
</li>
<li><p>Debugging repository state: If you are unsure of the state of your repository, you can use the reflog to see all the changes that have been made, and how they have affected the branches and HEAD.</p>
</li>
<li><p>Undoing operations: If you have made a mistake or want to undo an operation, you can use the reflog to find the state of the repository before the operation and reset the repository to that state.</p>
</li>
</ol>
<pre><code class="lang-bash">git reflog: Gives all the alternation <span class="hljs-keyword">done</span> previously. Its shows all the changes <span class="hljs-keyword">in</span> the repo

git reset --hard [commit-reference-in-reflog]: The git reflog <span class="hljs-built_in">command</span> is an essential tool <span class="hljs-keyword">for</span> Git users, as it provides a way to recover from mistakes and track changes to the state of the repository. It is recommended that you regularly run git reflog to keep track of changes to your repository and to ensure that you have a way to recover from mistakes.
</code></pre>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/9cf5e4d2-147d-4931-9b8a-2b14f8cd38a4/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230207%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230207T211838Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=011020a7d1270caea826c5164dfc57edafbd3e5b3df3940e960da339d9460f75&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Git object types:</p>
<ol>
<li><p>Commit: a snapshot of the project at a point in time, with metadata such as the author, message, and parent commit(s).</p>
</li>
<li><p>Tree: represents a folder (directory) in the repository; it maps names to blobs and other trees.</p>
</li>
<li><p>Blob: the raw contents of a file.</p>
</li>
</ol>
<p>NOTE: All objects, including commits, are saved in the ./.git/objects folder.</p>
<p><img src="https://s3.us-west-2.amazonaws.com/secure.notion-static.com/b61a5e7d-3c41-4463-a68c-1406841ca88c/Untitled.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&amp;X-Amz-Credential=AKIAT73L2G45EIPT3X45%2F20230207%2Fus-west-2%2Fs3%2Faws4_request&amp;X-Amz-Date=20230207T211926Z&amp;X-Amz-Expires=86400&amp;X-Amz-Signature=16572310534eda825146969fdf9b866a9a9d1caae8b12f555b6f22e0bf031f3b&amp;X-Amz-SignedHeaders=host&amp;response-content-disposition=filename%3D%22Untitled.png%22&amp;x-id=GetObject" alt /></p>
<p>Thank you for your valuable time !!</p>
]]></content:encoded></item><item><title><![CDATA[Ansible Beginner]]></title><description><![CDATA[What is Ansible ?
Ansible is an open-source automation platform that can help you manage and control various nodes from a central location. It allows you to automate tasks, including configuration management, application deployment, and task executio...]]></description><link>https://blog.devopscommunity.in/ansible-beginner</link><guid isPermaLink="true">https://blog.devopscommunity.in/ansible-beginner</guid><category><![CDATA[ansible]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Aman Srivastav]]></dc:creator><pubDate>Thu, 19 Jan 2023 13:37:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/npxXWgQ33ZQ/upload/d6876a21cad961a72520037939e17c9b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-ansible">What is Ansible ?</h3>
<p><strong>Ansible</strong> is an open-source automation platform that can help you manage and control various nodes from a central location. It allows you to automate tasks, including configuration management, application deployment, and task execution.</p>
<p>It is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.</p>
<p>Some key features of Ansible include:</p>
<ol>
<li><p>Simple to use: Ansible uses an easy-to-learn language (YAML) and has a simple, straightforward architecture.</p>
</li>
<li><p>Agentless: Ansible does not require any additional software to be installed on managed nodes.</p>
</li>
<li><p>Idempotent: Ansible tasks are designed to be run multiple times without causing unintended side effects.</p>
</li>
<li><p>Large community and support: Ansible is an open-source platform with a large user base, meaning there is a wealth of resources and support available.</p>
</li>
</ol>
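<p>Getting a first taste is quick. A minimal sketch, assuming a hypothetical <code>inventory.ini</code> file listing your nodes:</p>
<pre><code class="lang-bash">pip install ansible                     # or install via your OS package manager
ansible all -i inventory.ini -m ping    # agentless connectivity check over SSH using the ping module
</code></pre>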
<h3 id="heading-what-are-the-benefits-of-learning-ansible-basics"><strong>What Are the Benefits of Learning Ansible Basics?</strong></h3>
<ul>
<li><p>A free and open-source community project with a huge audience.</p>
</li>
<li><p>Battle-tested over many years as the preferred tool of IT wizards.</p>
</li>
<li><p>Easy to start and use from day one, without the need for any special coding skills.</p>
</li>
<li><p>Simple deployment workflow without any extra agents.</p>
</li>
<li><p>Includes some sophisticated features around modularity and reusability that come in handy as users become more proficient.</p>
</li>
<li><p>Extensive and comprehensive official documentation that is complemented by a plethora of online material produced by its community.</p>
</li>
</ul>
<h3 id="heading-ansible-architecture"><strong>Ansible Architecture:</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673988324454/8831f9c5-7c1f-48bc-ba09-ca35707a7305.png?auto=compress,format&amp;format=webp" alt /></p>
<p>From the diagram above, we can see how the pieces fit together:</p>
<ol>
<li><p>Everything from servers to desktop computers to network configurations and applications can all be stored in a Public/Private Cloud.</p>
</li>
<li><p>It shows that Ansible offers the features of a sizable cloud platform while letting users communicate with all of the modules through its API, and that it includes security measures.</p>
</li>
<li><p>An inventory file contains devices, groups, host variables, templates, and tasks. Along with the inventory file, playbooks must be created that describe how to handle all the devices, groups, host variables, and templates.</p>
</li>
<li><p>Once Ansible has a task or set of tasks in a playbook to run, it needs to know where to run those tasks. It needs an inventory of hosts. Ansible has a concept called “inventories” that consist of lists of hosts to perform actions stored in files such as YAML.</p>
</li>
<li><p>Modules in Ansible allow you to interact with the cloud like Azure or on-prem resources like Hyper-V. An Ansible module is a standalone, reusable script that can be used with the Ansible API. They can also be used by Ansible playbooks. One of the key factors of an Ansible module is that they are reusable and not meant for just one environment.</p>
</li>
<li><p>The Ansible core is augmented by supporting functions such as caching and logging.</p>
</li>
<li><p>It allows for the creation of various agentless frames that can be utilized in multiple automated networks.</p>
</li>
<li><p>A single repository can contain all the computers of an operational or IT infrastructure network.</p>
</li>
<li><p>Ansible is used to automate Linux and Unix machines.</p>
</li>
<li><p>It provides the needed APIs for the interaction of the end-to-end modules.</p>
</li>
</ol>
<h3 id="heading-ansible-modules"><strong>Ansible modules</strong></h3>
<p><strong>Ansible Modules</strong> are pre-written pieces of code that can be used in ansible playbooks to perform specific tasks. There are hundreds of modules available, and they can be used to manage everything from software installations to server configurations.</p>
<p>Here is an example of an ansible copy module:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673986649578/5a2c323c-621b-47af-a53c-88edc2897ed7.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ansible-playbook"><strong>Ansible playbook</strong></h3>
<p><strong>Ansible playbook</strong> is a YAML file that defines a series of tasks to be executed by Ansible. It specifies the managed nodes on which the tasks should be run and the order in which they should be executed.</p>
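<p>A minimal sketch of a playbook, written from the shell and then run against a hypothetical <code>inventory.ini</code> (the file and group names here are assumptions):</p>
<pre><code class="lang-bash">cat &gt; site.yml &lt;&lt;'EOF'
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
EOF

ansible-playbook -i inventory.ini site.yml
</code></pre>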
<p>Here is a list of the most commonly used Ansible Modules:</p>
<p>1. Ansible Package Module: Used to install, upgrade, and remove packages on remote hosts.</p>
<p>2. Ansible Service Module: Used to manage services on remote hosts.</p>
<p>3. Ansible File Module: Used to manage files and directories on remote hosts.</p>
<p>4. Ansible Copy Module: Used to copy files from the local machine to the remote hosts.</p>
<p>5. Ansible Fetch Module: Used to fetch files from remote hosts to the local machine.</p>
<p>6. Ansible Template Module: Used to create configuration files from templates.</p>
<p>7. Ansible Shell Module: Used to execute shell commands on remote hosts.</p>
<p>8. Ansible Debug Module: Used to print out debugging information.</p>
<p>9. Ansible Setup Module: Used to collect information about remote hosts.</p>
<p>10. Ansible Git Module: Used to manage git repositories on remote hosts.</p>
<p>11. Ansible Yum Module: Used to manage packages on Red Hat-based systems.</p>
<p>12. Ansible Apt Module: Used to manage packages on Debian-based systems.</p>
<p>13. Ansible Command Module: Used to execute commands on remote hosts.</p>
<p>14. Ansible Script Module: Used to execute scripts on remote hosts.</p>
<p>15. Ansible Cron Module: Used to manage cron jobs on remote hosts.</p>
<p>16. Ansible User Module: Used to manage user accounts on remote hosts.</p>
<p>17. Ansible Group Module: Used to manage user groups on remote hosts.</p>
<p>18. Ansible SELinux Module: Used to manage SELinux policies on remote hosts.</p>
<p>19. Ansible Firewall Module: Used to manage firewall rules on remote hosts.</p>
<p>20. Ansible Cloud Modules (AWS, GCP, Azure, etc.): Used to manage cloud resources on remote hosts.</p>
<p><strong>Ansible Inventory</strong> is a way for Ansible to track the information about the systems it is managing. It is a collection of data that describes the hosts and their characteristics. The inventory can be a simple list of hostnames or IP addresses, or it can be a more complex file that describes the hosts in groups and includes additional information about each host.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673987394550/3af226f3-8a4b-4a37-aa24-825f79464f40.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ansible-conditions"><strong>Ansible conditions</strong></h3>
<p><strong>Ansible conditions</strong> allow you to control the flow of execution in your playbooks and tasks based on certain conditions. The most common way to use conditions in Ansible is the "<strong>when</strong>" statement, which specifies a condition that must be met before a task is executed.</p>
<p>Here is a simple example; you can always adapt it to your own conditions as needed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673987488196/d40c6c72-541a-4ab8-b5a0-5080ce9aa891.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ansible-roles"><strong>Ansible roles</strong></h3>
<p><strong>Ansible roles</strong> are a way to organize and reuse Ansible tasks, files, and variables. Each role is defined in a single directory and it has a specific name that can be referenced in the playbook. Roles can be stored in the Ansible galaxy or in a local directory.</p>
<h3 id="heading-ansible-galaxy"><strong>Ansible Galaxy</strong></h3>
<p><strong>Ansible Galaxy</strong> is a public library of Ansible roles that can be easily downloaded and used in your playbooks. The ansible-galaxy command-line tool is included with the Ansible installation, so you don't have to install it separately. Red Hat also offers an enterprise version with a GUI.</p>
<p>The command to install roles with ansible-galaxy is below. You can also find more about Ansible Galaxy at <a target="_blank" href="https://galaxy.ansible.com/"><strong>https://galaxy.ansible.com/</strong></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673987579609/35f29de3-c35b-46e3-8042-5eb35fc70b2d.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ansible-tower"><strong>Ansible Tower</strong></h3>
<p><strong>Ansible Tower</strong> is a management tool for Ansible, an open-source automation engine that automates software provisioning, configuration management, and application deployment. Ansible Tower provides a centralized interface for managing and organizing Ansible automation in an enterprise environment.</p>
<ul>
<li><p>Role-based access control: Allows users to control access to Ansible automation based on roles and permissions.</p>
</li>
<li><p>Scheduling: Allows users to schedule Ansible automation to run at specific times or specific intervals.</p>
</li>
<li><p>Notifications: Allows users to receive notifications when automation runs or when certain events occur.</p>
</li>
<li><p>Reporting: Provides detailed reporting on the execution of Ansible automation, including success rates, run times, and output.</p>
</li>
<li><p>APIs: Provides a RESTful API for automating and integrating Ansible Tower with other tools and systems.</p>
</li>
</ul>
<p>Ansible Tower is used by IT operations and DevOps teams to automate repetitive tasks, quickly deploy applications, and manage infrastructure. It provides a graphical user interface to manage Ansible playbooks, inventory, and scheduled jobs. It also allows you to delegate certain automation tasks to specific teams or individuals and provides reporting and analytics to track and measure automation success.</p>
<h3 id="heading-ansible-loops"><strong>Ansible loops</strong></h3>
<p><strong>Ansible loops</strong> allow you to repeat tasks or actions multiple times, based on a set of data. Loops are defined using the with_* keywords, where * is the type of loop you want to use. There are several types of loops in Ansible, including with_items, with_dict, with_file, with_sequence, and with_subelements, each with its own use cases.</p>
<p>Here is an example of using the with_items loop to install multiple packages; the same pattern works with dictionaries, files, sequences, and so on.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673988121004/16c2bfb1-cb35-454e-af42-a3f3d5ff7bc0.png?auto=compress,format&amp;format=webp" alt /></p>
<h3 id="heading-ansible-variables"><strong>Ansible Variables</strong></h3>
<p><strong>Ansible Variables:</strong> Ansible uses variables to manage differences between systems. With Ansible, you can execute tasks and playbooks on multiple different systems with a single command. To represent the variations among those different systems, you can create variables with standard YAML syntax, including lists and dictionaries. You can define these variables in your playbooks, in your inventory, in reusable files or roles, or at the command line. You can also create variables during a playbook run by registering the return value or values of a task as a new variable.</p>
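<p>For instance, variables can be supplied right on the command line with the --extra-vars flag (the playbook and variable names here are hypothetical):</p>
<pre><code class="lang-bash">ansible-playbook -i inventory.ini site.yml -e "env=prod app_port=8080"   # -e / --extra-vars defines variables for this run
</code></pre>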
<h3 id="heading-thank-you-all-for-giving-your-valuable-time-for-reading"><strong>Thank you all for giving your valuable time for reading.</strong></h3>
<h3 id="heading-resources"><strong>Resources :</strong></h3>
<p><strong>#Kunal Kushwaha</strong></p>
<p><strong>#WeMakeDevs</strong></p>
<p><strong>#Techworld with Nana</strong></p>
<p><strong>#Kodekloud</strong></p>
]]></content:encoded></item><item><title><![CDATA[Docker in Practice. Part 1( Basics)]]></title><description><![CDATA[Introduction: What is Docker, and Why is it so Important?
In simple words, Docker is a tool that allows developers, sys-admins, etc., to easily deploy their applications in containers to run on the host operating system, i.e. Linux.
The primary advan...]]></description><link>https://blog.devopscommunity.in/docker-in-practice-part-1-basics</link><guid isPermaLink="true">https://blog.devopscommunity.in/docker-in-practice-part-1-basics</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[Devops articles]]></category><dc:creator><![CDATA[Debasish Biswas]]></dc:creator><pubDate>Fri, 13 Jan 2023 14:41:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673620002928/6c1c3ff0-7cb4-42d4-b985-591dd15827ad.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction-what-is-docker-and-why-is-it-so-important">Introduction: What is Docker, and Why is it so Important?</h1>
<p>In simple words, Docker is a tool that allows developers, sys-admins, etc., to easily deploy their applications in containers to run on the host operating system, i.e. Linux.</p>
<p>The primary advantage of Docker is that it enables developers to <strong>compile a program and all of its dependencies into a single, standardized unit</strong> for software development. It allows developers to package and distribute their applications fast, consistently, and reliably. It also makes it easy to run those applications in various environments, such as on a developer's local machine, in a testing environment, or in a production environment.<br />Docker also makes it easier to scale applications, as containers can be easily moved from one host to another.</p>
<h2 id="heading-what-was-before-docker">What was before Docker?</h2>
<p>Not Docker in particular, but before containers there were operating system virtualization, chroot, and resource isolation.<br />OS virtualization involves creating multiple virtual environments (VMs) on a single physical host; each VM runs its own operating system and applications, isolated from the others. Operating system virtualization has been around for decades and is still in use today.<br />Another technology that preceded containers is chroot, which allows a user to change the root directory for a process and its children. This can be used to isolate a process and its dependencies from the rest of the system.<br />Overall, containers build upon these and other technologies and concepts to provide a lightweight and portable way to isolate and run applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671341523435/mS_CASNhy.png" alt="Docker vs VMs" class="image--center mx-auto" /></p>
<h2 id="heading-advantages-of-docker">Advantages of Docker</h2>
<p>Containers have several advantages over prior technologies, such as they are more <strong>Lightweight</strong>, <strong>Portable</strong>, and <strong>Isolated.</strong> Containers also allow developers to package and distribute applications <strong>consistently</strong>, making it easier to develop and test applications. It can be easily <strong>automated</strong>, which can save time and reduce the risk of errors when building, testing, and deploying applications.</p>
<p>Overall, containers offer a lightweight and efficient way to isolate and run applications, making them an attractive option for developers.</p>
<p><mark>If you are already a Developer or just started with development (Frontend, Backend, Node dev, Python-dev, etc) knowledge of Containers will be incredibly beneficial to your development journey. And if you are planning to learn DevOps it is something necessary to know. This Blog on Docker will help you learn the Docker &amp; container concepts very easily, with examples.</mark></p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>There are no particular skills needed for this tutorial beyond a basic comfort with the UNIX command line and using a text editor. Prior experience in developing web applications will be beneficial but is not required. If you're interested in following along, please create an account on <a target="_blank" href="https://hub.docker.com/">Docker Hub</a>.</p>
<h1 id="heading-lets-do-a-hands-on-on-docker-containers">Let's do a hands-on on Docker containers.</h1>
<p>To work with Docker, it is not necessary to know the Docker architecture or how Docker works [<a target="_blank" href="https://docs.docker.com/get-started/overview/#:~:text=Docker%20architecture,to%20a%20remote%20Docker%20daemon.">You can see that here</a>.], but we need to understand some of the terms.</p>
<ol>
<li><p><strong>Image</strong>: A Docker image is a lightweight, standalone, and executable package that includes everything an application needs to run, including the application code, libraries, dependencies, and runtime.</p>
</li>
<li><p><strong>Container</strong>: A Docker container is a running instance of a Docker image. A container allows an application to be run in a lightweight, isolated environment.</p>
</li>
<li><p><strong>Registry</strong>: A Docker registry is a centralized storage location for Docker images. Docker provides a free public registry called Docker Hub, which allows users to store and distribute their images.</p>
</li>
<li><p><strong>Dockerfile</strong>: A Dockerfile is a text file that contains the instructions for building a Docker image. It specifies the base image, the dependencies, and the commands needed to run the application.</p>
</li>
</ol>
<p>Let's run the first command.</p>
<pre><code class="lang-bash">docker run hello-world
</code></pre>
<p>If your system is running Docker properly, you should have an output saying, "Hello from Docker!"</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671343875065/JVWuL9tWQ.png" alt /></p>
<p><code>docker run &lt;IMAGE_NAME&gt;</code> is just spinning up containers of the specified image.</p>
<p>You can also see in the 2<sup>nd</sup> and 3<sup>rd</sup> lines that Docker first searches for the image locally and, if it is unable to find it, looks at Docker Hub and pulls the image from there.</p>
<p>There are many flags we can use in the <code>docker run</code> like <code>--name &lt;CONTAINER_NAME&gt;</code> that will give a specified name to the container, which we can use to refer to instead of the container ID; we also use the <code>-d</code> flag which will run the container in the background and print out the container ID.</p>
<p>But you might ask, Hey Debasish, what do you mean by background? Where is the information about containers running in the background? How can I access them?</p>
<p>Trust me, we will explore that.</p>
<pre><code class="lang-bash">docker run -d --name myNginx nginx
</code></pre>
<p>This command will run a container in the background with the name "myNginx". If you remove the <code>-d</code> flag, the terminal appears to freeze, because the container runs in the foreground and takes over the terminal.</p>
<p>Now, if you run <code>docker container ls</code>, you can see the information about all the running containers(you can also use <code>docker ps</code>). Container ID, image name, status, ports, and container name.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671358719826/4PfxBlHr5.png" alt /></p>
<p>Ok, so my Nginx server is running, but how can I access it? Can I simply curl <code>http://localhost:80</code>? If you try this, you will not get any response. Now you may say, "Hey Debasish, though my Nginx server is running, why wouldn't I be able to use it? How can I access my Nginx server?"</p>
<p>The answer is that the container is running on our local machine, but the Nginx server we are trying to access is actually running inside the container. <strong>A container is not the server or the application we run on Docker, but the isolated environment where our applications or servers run</strong>. So we will be able to access the server from inside the container.</p>
<p>So, how can we go inside the container?</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it myNginx bash
</code></pre>
<p>Booom, we are inside the container, and your terminal initials will look like <code>root@&lt;container_id&gt;:/#</code>. Now you can <code>curl http://localhost:80</code>, and we will get the raw nginx home page as a response from the server.</p>
<p>Here <code>docker exec &lt;container_id&gt; &lt;command&gt;</code> will run the specified command inside the corresponding container, and <code>-it</code> is for an interactive terminal, so after opening the container's bash shell, we will stay inside. We can also run any command and open any other available shell, i.e., sh, bash etc.</p>
<p>There is another way to access any serving port from outside the container: we can poke a hole into the container by publishing a port from inside the container and forwarding it to an external port. We can do this while starting the container.</p>
<p>Let's first stop the running container:</p>
<pre><code class="lang-bash">docker container stop myNginx
</code></pre>
<p>Now let's start the container again while publishing container port 80 to port 8080 into our local machine.</p>
<pre><code class="lang-bash">docker run -d -p 8080:80 --name portNginx nginx
</code></pre>
<p><code>-p &lt;host_port_no&gt;:&lt;container_port_no&gt;</code> now we can access the Nginx home page by requesting http://localhost:8080. It will forward the request to the container port <code>80</code> and get back the response from there.</p>
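<p>A quick way to verify the mapping from your host:</p>
<pre><code class="lang-bash">curl -I http://localhost:8080   # expect an HTTP 200 response served by nginx from inside the container
</code></pre>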
<p>There are also some basic commands like</p>
<ol>
<li><p><code>docker search</code>: Search the Docker Hub for images. Use the docker search command followed by the search term.</p>
<pre><code class="lang-bash"> docker search ubuntu
</code></pre>
</li>
<li><p><code>docker pull</code>: To pull an image or a repository from a registry, use the <code>docker pull</code> command followed by the image or repository name.</p>
<pre><code class="lang-bash"> docker pull ubuntu:18.04
</code></pre>
</li>
<li><p><code>docker stop</code>: To stop a running container, use the <code>docker stop</code> command followed by the container name or ID.</p>
<p> First, we start the container by <code>docker run --name my-container ubuntu:18.04</code></p>
<pre><code class="lang-bash"> docker stop my-container
</code></pre>
</li>
<li><p><code>docker start</code>: To start a stopped container, use the <code>docker start</code> command followed by the container name or ID.</p>
<pre><code class="lang-bash"> docker start my-container
</code></pre>
</li>
<li><p><code>docker rm</code>: To remove one or more containers, use the docker rm command followed by the container name or ID.</p>
<pre><code class="lang-bash"> docker rm my-container
</code></pre>
<p> We need to use <code>-f</code> to remove a running container forcefully.</p>
</li>
<li><p><code>docker rmi</code> : To remove one or more images, use the <code>docker rmi</code> command followed by the image name or ID.</p>
<pre><code class="lang-bash"> docker rmi ubuntu:18.04
</code></pre>
</li>
<li><p><code>docker inspect</code>: To display detailed information about a Docker container or image. You can specify the name or ID of the container or image as an argument.</p>
<p> For example, to inspect a container called <code>my-container</code>, we can use the following command:</p>
<pre><code class="lang-bash"> docker inspect my-container
</code></pre>
<p> This will display detailed information about the my-container container, including the container's ID, name, state, image, and configuration.</p>
</li>
</ol>
<h3 id="heading-link-to-part-2httpsdebasishbswshashnodedevdocker-in-practice-part-2"><a target="_blank" href="https://debasishbsws.hashnode.dev/docker-in-practice-part-2">Link to Part 2</a></h3>
<p>In part 2, we will dive deeply into Dockerfile, how to build and dockerize applications, push them to DockerHub, and learn about Docker volume and Docker networks.</p>
]]></content:encoded></item><item><title><![CDATA[Ci♾️cd]]></title><description><![CDATA[Hey everyone, hope you are doing fine, if not you will be fine:)
And you are here to read about (CICD) Let's talk about cicd what is it? why it's involved in our lives? i will make sure after reading this blog, you can breathe good stuff regarding CI...]]></description><link>https://blog.devopscommunity.in/cicd</link><guid isPermaLink="true">https://blog.devopscommunity.in/cicd</guid><category><![CDATA[BlogsWithCC]]></category><category><![CDATA[Devops]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[CI/CD]]></category><dc:creator><![CDATA[kunreddy shyamsunder reddy]]></dc:creator><pubDate>Wed, 11 Jan 2023 19:03:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673444276000/55bdf91f-8f08-4649-9956-4742a6d0355b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, hope you are doing fine, if not you will be fine:)</p>
<p>And you are here to read about CI/CD. Let's talk about CI/CD: what is it, and why is it involved in our lives? I will make sure that after reading this blog, you get a clear idea of it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673440338398/e513cb53-ee45-43e0-afcf-14d37a39105f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-now-lets-understand-what-is-ci-and-cd">Now, let's understand what is ci and cd.</h3>
<h1 id="heading-ci-continuous-integration">CI (continuous integration):-</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673418494081/b9b1575e-d0ed-4f50-b348-c5fa5709a9c3.jpeg" alt class="image--center mx-auto" /></p>
<p>Let's think about a situation: we as developers write some code and share it to a codebase such as a Git repository, where people on our team also share their versions of the code.</p>
<p>We do this continuously for weeks, and the code works fine in the development area, but suppose it then fails on the deployment server, in the build stage, or in any other stage.</p>
<p>Then you need to go back and dig into the issue, which is neither time-efficient nor cost-efficient. So what if the code were tested at the same time the changes are committed? How good would it be if we found the errors at the initial stage of developing the code and fixed them right there?</p>
<p>It would be good, and this method is both time-efficient and cost-efficient. So the explanation of continuous integration is:</p>
<p><strong>The moment we commit code into a repository, the build and automated tests kick off at that instant, so we can be confident that our code is working fine.</strong></p>
<p>And this happens for every commit that we make.</p>
<p>So with continuous integration, we can easily catch issues in the early stages instead of writing the entire code for weeks and testing it later.</p>
<p>Now let's see what "CONTINUOUS DEPLOYMENT" is.</p>
<h2 id="heading-cdcontinuous-deployment">CD(<strong>continuous deployment</strong>) :-</h2>
<p>Many of us think that continuous delivery and continuous deployment are the same, but they are not.</p>
<p>Let's talk first about</p>
<h3 id="heading-continuous-delivery">continuous delivery:-</h3>
<p>Suppose a client asks for an update on the work. Instead of deploying directly to the production environment, we create a mock server and push our work (code) there, so the client can see the progress. This is known as continuous delivery.</p>
<p>Now let's see what</p>
<h3 id="heading-continuous-deployment">continuous deployment:-</h3>
<p>Here, after integration, we have our code, which has gone through the automated build and test stages,</p>
<p><strong>and from there the code is deployed directly to the production server, which means that after we make a commit it directly hits the production environment (after a few tests).</strong></p>
<p>So if there is any mishap, it goes straight out to users, which makes it a little scary.</p>
<p>Many companies opt for CI but won't opt for continuous deployment.</p>
<p>To be on the safe side, they go with manual deployment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673418977178/fa31dadc-e0f6-40cb-ad79-a7dd98e3dbf6.jpeg" alt class="image--center mx-auto" /></p>
<p>After seeing the picture above, I hope we have a good understanding.</p>
<p>For this CI/CD process there are many tools on the market, but we will look at some famous ones.</p>
<p>They are <strong>GITLAB, JENKINS, AZURE DEVOPS, TRAVIS CI</strong></p>
<p><strong>GITLAB</strong> - A built-in continuous integration and delivery service for GitLab, an open-source git management tool.</p>
<p><strong>JENKINS</strong> - One of the most widely used open-source CI/CD tools. It provides a wide range of plugins for building, testing, and deploying code, and is highly customizable.</p>
<p><strong>AZURE DEVOPS</strong> - A set of cloud-based tools from Microsoft that includes Azure Pipelines, a CI/CD service that supports multiple languages and platforms.</p>
<p><strong>TRAVIS CI</strong> - A cloud-based CI/CD service that is popular among open-source projects. It integrates with GitHub and is easy to set up.</p>
<p>But I prefer <strong>GITLAB</strong> because we can do everything in one place, and we can manage source code as well since it integrates with Git and GitHub easily.</p>
<p>Now let's get into the explanation of CI/CD and what stages are involved.</p>
<p>There are five stages in CI/CD, namely:</p>
<ul>
<li><p><strong>Source Code Management</strong></p>
</li>
<li><p><strong>Build</strong></p>
</li>
<li><p><strong>Test</strong></p>
</li>
<li><p><strong>Deployment</strong></p>
</li>
<li><p><strong>Monitoring</strong></p>
</li>
</ul>
<p>Let's see these in detail.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673443736185/bef230d6-659f-4d06-ad42-f7bbf8883666.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-source-code-management"><strong>Source Code Management</strong></h3>
<p>It is the first stage and it involves pulling code changes from a version control system (VCS) such as Git. This is where the developers push the code for review and testing.</p>
<h3 id="heading-build"><strong>Build</strong></h3>
<p>The second stage is to build the code, typically using a build tool such as Maven or Gradle. This step compiles the code, runs static analysis and unit tests, and produces the necessary artifacts (e.g. jar, war, or executable)</p>
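<p>For a Java project built with Maven, for instance, this stage typically boils down to a single command:</p>
<pre><code class="lang-bash">mvn clean package   # compiles the code, runs unit tests, and produces the artifact under target/ (jar/war)
# or, with Gradle:
# gradle build
</code></pre>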
<h3 id="heading-test"><strong>Test</strong></h3>
<p>In this stage, the code undergoes automated tests in different environments, mainly unit tests, integration tests, and so on.</p>
<h3 id="heading-deployment"><strong>Deployment</strong></h3>
<p>If the tests pass, the pipeline progresses to deployment. The artifacts are deployed to a staging environment for further testing by QA teams and then subsequently to production for end-users to use.</p>
<p><em>But here I need to give an introduction to the word</em> <strong><em>pipeline.</em></strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1673443510926/78c9c3a5-fbc1-483e-b530-b450d8a34831.png" alt class="image--center mx-auto" /></p>
<p><em>Basically, the pipeline is typically made up of a series of steps or stages, each of which represents a specific task or set of tasks in the software development process.</em></p>
<p><em>We do all the above things in one place, and we call it a pipeline.</em></p>
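<p>As a concrete sketch, a minimal GitLab CI pipeline for a Maven project might look like this (the .gitlab-ci.yml file lives at the repository root; the deploy script is hypothetical):</p>
<pre><code class="lang-bash">cat &gt; .gitlab-ci.yml &lt;&lt;'EOF'
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - mvn -B clean package

test-job:
  stage: test
  script:
    - mvn -B verify

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh staging   # hypothetical deployment script
  only:
    - main
EOF
</code></pre>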
<h3 id="heading-monitoring"><strong>Monitoring</strong></h3>
<p>This last stage is to monitor the deployed application for performance, errors, and other issues, and to quickly address any problems that arise.</p>
<p>I want to cover the hands-on part of this process as well, but I will do it in another blog because I don't want to extend this one further.</p>
<p><strong><em>HAPPY LEARNING</em></strong> :)</p>
]]></content:encoded></item></channel></rss>