<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Sushil Tiwari]]></title><description><![CDATA[Hey, what's up]]></description><link>https://susiltiwari.com.np</link><generator>RSS for Node</generator><lastBuildDate>Thu, 16 Apr 2026 04:52:27 GMT</lastBuildDate><atom:link href="https://susiltiwari.com.np/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Why Switching a Kubernetes PVC to RWX (and a Different Azure StorageClass) Is Harder Than It Looks]]></title><description><![CDATA[A strange problem showed up while I fixed storage in Kubernetes. The whole thing made little sense at first.
I was trying to make an existing PersistentVolumeClaim (PVC) work with ReadWriteMany (RWX) ]]></description><link>https://susiltiwari.com.np/why-switching-a-kubernetes-pvc-to-rwx-and-a-different-azure-storageclass-is-harder-than-it-looks</link><guid isPermaLink="true">https://susiltiwari.com.np/why-switching-a-kubernetes-pvc-to-rwx-and-a-different-azure-storageclass-is-harder-than-it-looks</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[storage class]]></category><category><![CDATA[storage]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Fri, 20 Mar 2026 04:41:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/639ffefe052fc36d4a8f8b4e/a435769a-0bb5-4018-8c10-b971e2b472f2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I ran into a strange problem while working on storage in Kubernetes, and at first it made very little sense.</p>
<p>I was trying to make an <strong>existing PersistentVolumeClaim (PVC)</strong> work with <strong>ReadWriteMany (RWX)</strong> and a <strong>different Azure StorageClass</strong>. At first, it felt like this should be a simple YAML update but it wasn’t that easy.</p>
<h2>What I Tried to Do</h2>
<p>I’d set up a PVC earlier in a namespace called <code>demo-ns</code>. It was originally created with:</p>
<ul>
<li><p>Access mode <code>ReadWriteOnce</code> (RWO), so only a single node can mount the volume read-write at a time</p>
</li>
<li><p>Azure Disk (so basically a single-writer style volume)</p>
</li>
</ul>
<p>Later, I wanted to change it to:</p>
<ul>
<li><code>ReadWriteMany</code> (RWX), backed by a different Azure StorageClass that supports RWX</li>
</ul>
<p>But when I tried to edit the PVC, the changes wouldn’t stick: the update was rejected, and nothing moved forward.</p>
<img src="https://marcbrandner.com/wp-content/uploads/2020/10/rwo_vs_rwm.svg" alt="" style="display:block;margin:0 auto" />

<h2>Here's why it trips people up</h2>
<p>This is where it gets complicated, for two reasons:</p>
<ol>
<li><strong>PVC specs are mostly immutable after creation</strong></li>
</ol>
<p>Once a PVC exists, and especially once it’s bound, Kubernetes blocks changes to critical fields such as <code>accessModes</code>. Apart from a few fields (like the requested storage size, if the StorageClass allows volume expansion), the spec is immutable after creation.</p>
<p>So simply editing the YAML and re-applying it does not work: the API server rejects the change.</p>
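<p>You can see the immutability for yourself by trying to patch the access mode in place (a sketch; the PVC name <code>my-pvc</code> is illustrative, the namespace is from this post’s setup):</p>
<pre><code class="lang-bash"># Attempt to change the access mode of a bound PVC in place.
# The API server rejects this, because a PVC spec is immutable
# after creation apart from the requested storage size.
kubectl patch pvc my-pvc -n demo-ns --type merge \
  -p '{"spec":{"accessModes":["ReadWriteMany"]}}'
</code></pre>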
<ol start="2">
<li><strong>Storage choice shapes how RWX behaves - the application plays a role too</strong></li>
</ol>
<p>Setting <code>ReadWriteMany</code> in Kubernetes isn’t enough on its own. The storage system underneath has to support multiple nodes reading and writing at the same time; if it doesn’t, the access mode is just a label. Azure Disk, for instance, is a block device that attaches to one node at a time, which is why it can’t back an RWX volume.</p>
<p>So you do need a StorageClass that supports RWX. But even then, you can’t change an existing PVC in place.</p>
<p>And even when the storage supports RWX, letting multiple Pods write to the same data directory is dangerous, and it hits databases especially hard. Most database engines assume exclusive access to their files; if one Pod writes while another reads or writes mid-operation, you get failed locks, corrupted files, and sudden crashes.</p>
<h2>What happened next (the unexpected part)</h2>
<p>We started fresh: we recreated the claim with RWX, backed by an Azure Files (CSI) StorageClass that supports that mode.</p>
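<p>The replacement claim looked roughly like this (a sketch; the claim name is illustrative, and <code>azurefile-csi</code> is the Azure Files CSI class name on AKS, so check <code>kubectl get storageclass</code> for the names in your cluster):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                 # illustrative name
  namespace: demo-ns
spec:
  accessModes:
    - ReadWriteMany                 # RWX needs a backend that supports it
  storageClassName: azurefile-csi   # Azure Files CSI class on AKS
  resources:
    requests:
      storage: 10Gi
</code></pre>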
<p>Yet the application still locked up once several Pods hit the volume at the same time.</p>
<p>That’s when it clicked: RWX lets many Pods mount the same volume, which also means those Pods can run at once and clash over the same database files without warning.</p>
<h2>How we kept things steady</h2>
<p>In the end, things landed back where they started: basic, quiet, nothing fancy.</p>
<p>We moved back to RWO, with a single Pod handling the workload. Jobs now run sequentially rather than in parallel: one finishes before the next begins, and only one Pod ever writes to the volume.</p>
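<p>In Deployment terms, the single-writer setup can be sketched like this (names are illustrative): <code>replicas: 1</code> keeps a single Pod, and the <code>Recreate</code> strategy makes sure the old Pod releases the RWO volume before a new one tries to mount it:</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: app                  # illustrative
  namespace: demo-ns
spec:
  replicas: 1                # a single writer Pod
  strategy:
    type: Recreate           # stop the old Pod before starting the new one
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: myapp:latest        # illustrative
        volumeMounts:
        - name: data
          mountPath: /var/lib/app
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-pvc        # the RWO claim (illustrative name)
</code></pre>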
<p>We also kept the default StorageClass the volume was originally provisioned with. Swapping the storage backend underneath an existing setup invites breakage, so we chose stability over change and left the original configuration running.</p>
<h2>Takeaway</h2>
<p>Switching the accessMode to <code>ReadWriteMany</code> on an existing PVC won’t go smoothly: Kubernetes rejects the edit, and changing the StorageClass at the same time doesn’t help. Even a setup that looks like it should work can fail quietly, leaving you with stalled Pods or rejected mounts and few clues to go on. Not every storage backend supports shared writes, and even when one does, the application has to tolerate them too.</p>
<p>In practice, changing the access mode or StorageClass means creating a fresh PVC and migrating the data. And before reaching for RWX, check whether it actually fits the workload; databases in particular need close attention.</p>
<h2>References</h2>
<ul>
<li><p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/">https://kubernetes.io/docs/concepts/storage/storage-classes/</a></p>
</li>
<li><p><a href="https://marcbrandner.com/blog/your-very-own-kubernetes-readwritemany-storage/">https://marcbrandner.com/blog/your-very-own-kubernetes-readwritemany-storage/</a></p>
</li>
<li><p><a href="https://github.com/rancher/local-path-provisioner/issues/70">https://github.com/rancher/local-path-provisioner/issues/70</a></p>
</li>
<li><p><a href="https://medium.com/@golusstyle/demystifying-the-multi-attach-error-for-volume-causes-and-solutions-595a19316a0c">https://medium.com/@golusstyle/demystifying-the-multi-attach-error-for-volume-causes-and-solutions-595a19316a0c</a></p>
</li>
<li><p><a href="https://stackoverflow.com/questions/55474193/volume-is-already-attached-by-pod">https://stackoverflow.com/questions/55474193/volume-is-already-attached-by-pod</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Azure Tutorial: Building Virtual Machines and Managing Network Security Groups]]></title><description><![CDATA[This is the continue of https://hashnode.com/post/cm8v7ntxi000209kw2xpm2tp7
To help you further understand Azure networking, let’s walk through a practical example of creating Virtual Machines (VMs), Network Security Groups (NSGs), and configuring ne...]]></description><link>https://susiltiwari.com.np/azure-tutorial-building-virtual-machines-and-managing-network-security-groups</link><guid isPermaLink="true">https://susiltiwari.com.np/azure-tutorial-building-virtual-machines-and-managing-network-security-groups</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sun, 30 Mar 2025 05:47:17 GMT</pubDate><content:encoded><![CDATA[<p>This is the continue of <a target="_blank" href="https://hashnode.com/post/cm8v7ntxi000209kw2xpm2tp7">https://hashnode.com/post/cm8v7ntxi000209kw2xpm2tp7</a></p>
<p>To help you further understand Azure networking, let’s walk through a practical example of creating <strong>Virtual Machines (VMs)</strong>, <strong>Network Security Groups (NSGs)</strong>, and configuring necessary resources via <strong>Azure CLI</strong>.</p>
<h4 id="heading-step-1-setting-up-your-azure-environment"><strong>Step 1: Setting Up Your Azure Environment</strong></h4>
<p>Before starting, ensure you're logged into your Azure account:</p>
<pre><code class="lang-bash">az login
</code></pre>
<p>Set your desired subscription if necessary:</p>
<pre><code class="lang-bash">az account <span class="hljs-built_in">set</span> --subscription <span class="hljs-string">"your-subscription-name"</span>
</code></pre>
<h4 id="heading-step-2-create-a-resource-group"><strong>Step 2: Create a Resource Group</strong></h4>
<p>A <strong>Resource Group</strong> is needed to organize and manage the Azure resources. Use the following command to create one:</p>
<pre><code class="lang-bash">az group create --name MyResourceGroup --location eastus
</code></pre>
<h4 id="heading-step-3-create-a-virtual-network-vnet-and-subnet"><strong>Step 3: Create a Virtual Network (VNet) and Subnet</strong></h4>
<p>Now, we’ll create a <strong>Virtual Network (VNet)</strong> with a <strong>subnet</strong>:</p>
<pre><code class="lang-bash">az network vnet create --resource-group MyResourceGroup --name MyVNet --address-prefix 10.0.0.0/16 --subnet-name MySubnet --subnet-prefix 10.0.0.0/24
</code></pre>
<p>This command creates a VNet with the address space <code>10.0.0.0/16</code> and a subnet <code>MySubnet</code> within it.</p>
<h4 id="heading-step-4-create-a-network-security-group-nsg"><strong>Step 4: Create a Network Security Group (NSG)</strong></h4>
<p>To ensure proper security, we'll create a <strong>Network Security Group (NSG)</strong>:</p>
<pre><code class="lang-bash">az network nsg create --resource-group MyResourceGroup --name MyNSG
</code></pre>
<h4 id="heading-step-5-define-inbound-traffic-rules-for-nsg"><strong>Step 5: Define Inbound Traffic Rules for NSG</strong></h4>
<p>To allow HTTP traffic (port 80) and deny all other inbound traffic, use the following commands:</p>
<p><strong>Allow HTTP traffic:</strong></p>
<pre><code class="lang-bash">az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNSG --name Allow-HTTP --protocol tcp --priority 100 --destination-port-range 80 --access Allow --direction Inbound
</code></pre>
<p><strong>Deny all other inbound traffic:</strong></p>
<pre><code class="lang-bash">az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNSG --name Deny-All-Inbound --protocol <span class="hljs-string">'*'</span> --priority 200 --access Deny --direction Inbound
</code></pre>
<h4 id="heading-step-6-create-a-public-ip-address"><strong>Step 6: Create a Public IP Address</strong></h4>
<p>Next, we need a <strong>Public IP</strong> for the VM:</p>
<pre><code class="lang-bash">az network public-ip create --resource-group MyResourceGroup --name MyPublicIP --allocation-method Dynamic
</code></pre>
<h4 id="heading-step-7-create-a-network-interface-nic"><strong>Step 7: Create a Network Interface (NIC)</strong></h4>
<p>We will create a <strong>Network Interface (NIC)</strong> and associate it with the <strong>Public IP</strong> and <strong>NSG</strong>:</p>
<pre><code class="lang-bash">az network nic create --resource-group MyResourceGroup --name MyNIC --vnet-name MyVNet --subnet MySubnet --network-security-group MyNSG --public-ip-address MyPublicIP
</code></pre>
<h4 id="heading-step-8-create-the-virtual-machine-vm"><strong>Step 8: Create the Virtual Machine (VM)</strong></h4>
<p>Now, let’s create a <strong>Virtual Machine</strong> and associate it with the <strong>NIC</strong> created earlier:</p>
<pre><code class="lang-bash">az vm create --resource-group MyResourceGroup --name MyVM --nics MyNIC --image UbuntuLTS --admin-username azureuser --admin-password <span class="hljs-string">'YourPasswordHere'</span> --size Standard_B1s --public-ip-address-dns-name myvm-public-ip
</code></pre>
<p>This command creates a <strong>VM</strong> named <code>MyVM</code> running <strong>Ubuntu LTS</strong>, and associates it with <code>MyNIC</code>. You can connect to it using SSH.</p>
<h4 id="heading-step-9-verify-vm-setup"><strong>Step 9: Verify VM Setup</strong></h4>
<p>You can verify the <strong>Public IP</strong> assigned to your VM using the following command:</p>
<pre><code class="lang-bash">az vm show --resource-group MyResourceGroup --name MyVM --query <span class="hljs-string">"publicIps"</span>
</code></pre>
<h4 id="heading-step-10-connect-to-your-vm"><strong>Step 10: Connect to Your VM</strong></h4>
<p>For <strong>Linux VMs</strong>, you can SSH into the VM using the <strong>Public IP</strong> or <strong>DNS name</strong>:</p>
<pre><code class="lang-bash">ssh azureuser@&lt;Public-IP&gt;
</code></pre>
<h4 id="heading-step-11-clean-up-resources"><strong>Step 11: Clean Up Resources</strong></h4>
<p>Once you’re done testing, you can delete all the resources to avoid unnecessary charges:</p>
<pre><code class="lang-bash">az group delete --name MyResourceGroup --yes --no-wait
</code></pre>
<p>This command deletes the <strong>Resource Group</strong> and all resources within it. Thank you :)</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Key Concepts in Azure Networking]]></title><description><![CDATA[Microsoft Azure offers Azure Virtual Network (VNet) as its core networking service, enabling users to create private, secure, and scalable cloud networks. Whether you're setting up cloud-only applications or hybrid architectures, understanding VNets ...]]></description><link>https://susiltiwari.com.np/azure-networking</link><guid isPermaLink="true">https://susiltiwari.com.np/azure-networking</guid><category><![CDATA[azure-network]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[networking]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sun, 30 Mar 2025 05:39:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743313049072/75cbb7f3-c061-4dfe-b4c8-2dad89192dae.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microsoft Azure offers <strong>Azure Virtual Network (VNet)</strong> as its core networking service, enabling users to create private, secure, and scalable cloud networks. Whether you're setting up cloud-only applications or hybrid architectures, understanding VNets is crucial.</p>
<p>In addition to virtual networks and connectivity options, Azure provides tools to monitor and manage network traffic, perform load balancing, and ensure secure user connections.</p>
<h2 id="heading-what-is-an-azure-virtual-network-vnet">What is an Azure Virtual Network (VNet)?</h2>
<p>An <strong>Azure Virtual Network (VNet)</strong> is a private, secure network in the cloud that links your Azure resources, such as virtual machines, databases, and apps. It allows you to manage how these resources communicate with each other and the internet, ensuring privacy and security. Essentially, it's like having your own isolated network in the cloud.</p>
<p>Think of it as your <strong>own private, isolated “office”</strong> in the Azure cloud. The resources in this VNet can talk to each other, but they’re shielded from the outside world unless you explicitly allow it.</p>
<h3 id="heading-key-benefits-of-azure-vnets">Key Benefits of Azure VNets</h3>
<ul>
<li><p><strong>Isolation:</strong> Just like a physical network in your office, you can control which resources can communicate with each other. This is done using <strong>subnets</strong>, which divide your VNet into smaller, more manageable sections.</p>
</li>
<li><p><strong>Security:</strong> Using <strong>Network Security Groups (NSGs)</strong>, you can restrict traffic between resources, like a security guard at the entrance of your office only letting authorized people in.</p>
</li>
<li><p><strong>Flexibility:</strong> VNets can easily grow or shrink as your cloud needs change. Add more resources, or scale down based on demand — just like rearranging your office layout to accommodate a growing team.</p>
</li>
<li><p><strong>Connectivity:</strong> You can even connect your VNet to other VNets or extend it to your on-premises office network using <strong>VPN Gateways</strong> or <strong>ExpressRoute</strong> for hybrid cloud solutions.</p>
</li>
</ul>
<p><img src="https://miro.medium.com/v2/resize:fit:1400/0*YPPVPwJA9qWtJ2Jv.png" alt="Diagram showing Azure Virtual Network setup. Users connect to a Security Group, which links to a Virtual Machine within a Subnet. The Azure logo is at the top." class="image--center mx-auto" /></p>
<h2 id="heading-core-concepts-of-azure-networking">Core Concepts of Azure Networking</h2>
<h3 id="heading-1-subnets">1. <strong>Subnets</strong></h3>
<p>Let’s understand this with an example. Think of <strong>subnets</strong> as <strong>different floors or rooms</strong> in your office. Each room has a specific function, and you control who can enter each room. For example:</p>
<ul>
<li><p>One <strong>room (subnet)</strong> could house your <strong>web servers</strong>.</p>
</li>
<li><p>Another room could be reserved for your <strong>database servers</strong>.</p>
</li>
<li><p>A third room might host your <strong>application servers</strong>.</p>
</li>
</ul>
<p><strong>VNets</strong> can be divided into <strong>subnets</strong>, which help segregate resources logically and apply specific security policies. Common subnet use cases include:</p>
<ul>
<li><p><strong>Web Tier Subnet</strong>: Hosts frontend applications.</p>
</li>
<li><p><strong>App Tier Subnet</strong>: Runs backend services and APIs.</p>
</li>
<li><p><strong>Database Subnet</strong>: Stores and processes data with restricted access.</p>
</li>
</ul>
<h3 id="heading-2-application-gateway"><strong>2. Application Gateway</strong></h3>
<p>Now, imagine that your office has a <strong>receptionist</strong> who directs visitors to the right department. <strong>Azure Application Gateway</strong> works in a similar way—it is a <strong>traffic manager</strong> for your web applications. It ensures that web requests are routed efficiently to the right backend services.</p>
<p>It operates at the <strong>application layer (Layer 7)</strong>, and it’s responsible for:</p>
<ul>
<li><p><strong>Load balancing</strong> web traffic to different servers.</p>
</li>
<li><p><strong>SSL termination</strong>, meaning it handles encrypted connections to improve performance.</p>
</li>
<li><p><strong>URL-based routing</strong>, sending traffic to specific backend pools based on the URL. For example:</p>
<ul>
<li><p>Traffic going to <a target="_blank" href="http://www.yourcompany.com/frontend/*"><code>www.yourcompany.com/frontend/*</code></a> can be routed to the <strong>frontend servers</strong>.</p>
</li>
<li><p>Traffic going to <a target="_blank" href="http://www.yourcompany.com/api/*"><code>www.yourcompany.com/api/*</code></a> can be routed to the <strong>backend servers</strong>.</p>
</li>
</ul>
</li>
</ul>
<p>It can also provide <strong>security</strong> through a <strong>Web Application Firewall (WAF)</strong>, protecting your application from threats like SQL injection and cross-site scripting (XSS).</p>
<h3 id="heading-2-ip-addressing">3. <strong>IP Addressing</strong></h3>
<p>When you assign an IP address to a resource in your <strong>VNet</strong>, it's like giving each phone in your office a unique extension number. With both <strong>private IPs</strong> (for communication within the VNet) and <strong>public IPs</strong> (for communication with the outside world), you can control how each resource interacts externally.</p>
<p>Each <strong>VNet</strong> needs an <strong>address space</strong> (CIDR block) that sets its <strong>IP range</strong>. These addresses can be assigned as:</p>
<ul>
<li><p><strong>Private IPs</strong> (for internal communication)</p>
</li>
<li><p><strong>Public IPs</strong> (for internet-facing services)</p>
</li>
</ul>
<h3 id="heading-3-network-security-groups-nsgs">4. <strong>Network Security Groups (NSGs)</strong></h3>
<p>Think of <strong>NSGs</strong> as the <strong>security team</strong> for your office network. They define who can access what. Want to restrict access to certain resources? An <strong>NSG</strong> can enforce these security rules to only allow certain types of traffic, ensuring <strong>unauthorized users</strong> don’t enter <strong>restricted areas</strong>.</p>
<p>NSGs act as <strong>firewalls</strong> for filtering inbound and outbound traffic based on rules. Example rules:</p>
<ul>
<li><p>Allow HTTP (80) traffic to web servers</p>
</li>
<li><p>Restrict access to database subnet from the public internet</p>
</li>
</ul>
<h3 id="heading-4-azure-firewall">5. <strong>Azure Firewall</strong></h3>
<p>It works like the firewall at your office's <strong>main entrance</strong>, filtering <strong>incoming</strong> and <strong>outgoing</strong> traffic to protect against potential threats. It adds an additional layer of protection for the network and its resources.</p>
<h3 id="heading-5-routing-amp-user-defined-routes-udrs">6. <strong>Routing &amp; User Defined Routes (UDRs)</strong></h3>
<p>Just like your office has clearly defined paths for communication between departments, <strong>User Defined Routes (UDRs)</strong> ensure that data flows efficiently between resources within and outside your VNet. You can customize these routes to ensure the right connections are made.</p>
<p>By default, Azure automatically routes traffic between subnets and connected networks. However, you can define <strong>User Defined Routes (UDRs)</strong> to customize how traffic flows.</p>
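<p>As a quick sketch (resource names and the appliance IP are illustrative), a UDR that forces all Internet-bound traffic through a network virtual appliance looks like this in the Azure CLI:</p>
<pre><code class="lang-bash"># Create a route table and a default route via an NVA at 10.0.2.4
az network route-table create --resource-group MyResourceGroup --name MyRouteTable
az network route-table route create --resource-group MyResourceGroup \
  --route-table-name MyRouteTable --name ToFirewall \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

# Associate the route table with a subnet so the route takes effect
az network vnet subnet update --resource-group MyResourceGroup \
  --vnet-name MyVNet --name MySubnet --route-table MyRouteTable
</code></pre>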
<h3 id="heading-6-load-balancer">7. <strong>Load Balancer</strong></h3>
<p>If you're running a high-traffic website or application, the <strong>Azure Load Balancer</strong> helps distribute traffic evenly across your VNet's resources. This Azure networking service adjusts automatically as administrators scale instances up or down.</p>
<h3 id="heading-7-expressroute">8. ExpressRoute</h3>
<p>Azure ExpressRoute is a networking service that creates a <strong>private connection</strong> between a company’s on-premises infrastructure and Microsoft Azure. Unlike the public internet, this private link offers <strong>lower latency</strong> and <strong>greater reliability</strong>.</p>
<p><strong>ExpressRoute</strong> supports four connectivity models:</p>
<ul>
<li><p>CloudExchange Colocation</p>
</li>
<li><p>Point-to-point Ethernet Connection</p>
</li>
<li><p>Any-to-any Connection</p>
</li>
<li><p>ExpressRoute Direct</p>
</li>
</ul>
<h3 id="heading-8-azure-network-watcher">9. <strong>Azure Network Watcher</strong></h3>
<p><strong>Azure Network Watcher</strong> is a monitoring tool that provides insights into your Azure network resources. It helps IT teams track, diagnose, and analyze network performance by offering tools to view metrics, logs, and resource interconnections. Unlike individual resource monitoring, Network Watcher gives a comprehensive view of IaaS products, such as <strong>Azure VMs</strong> and <strong>VNets</strong>. Charges apply based on the features used.</p>
<h3 id="heading-9-content-delivery-network">10. <strong>Content Delivery Network</strong></h3>
<p><strong>Azure CDN</strong> is a service that delivers high-bandwidth content, like documents and files, through cached copies stored at <strong>edge locations</strong> around the world. This ensures content is delivered <strong>closer to end users</strong>, minimizing latency. While Azure CDN optimizes content delivery, it focuses more on performance and less on load balancing and security compared to <strong>Azure Front Door</strong>. It's ideal for serving <strong>static content</strong> quickly and efficiently.</p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes Init Containers]]></title><description><![CDATA[What is an Init Container?
As Kubernetes pods can have more than one container.Init containers are specialized containers that run before app containers in a Kubernetes Pod. Unlike regular containers, init containers must complete successfully before...]]></description><link>https://susiltiwari.com.np/kubernetes-init-containers</link><guid isPermaLink="true">https://susiltiwari.com.np/kubernetes-init-containers</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[k8s]]></category><category><![CDATA[containers]]></category><category><![CDATA[init-container]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sun, 02 Mar 2025 07:39:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/GSiEeoHcNTQ/upload/0119aa2f437affc807da81470e8ee677.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-an-init-container"><strong>What is an Init Container?</strong></h2>
<p>A Kubernetes Pod can have more than one container.<br /><strong>Init containers</strong> are specialized containers that run before the app <strong>containers</strong> in a Kubernetes Pod. Unlike regular containers, init containers must complete successfully before the main application containers can start. This sequential execution pattern makes them perfect for setup tasks.</p>
<h2 id="heading-why-use-init-containers">Why Use Init Containers?</h2>
<ol>
<li><p>Separating initialization logic from application code</p>
</li>
<li><p>Delaying application startup until dependencies are ready</p>
</li>
<li><p>Running setup operations with different security contexts</p>
</li>
<li><p>Pre-populating volumes with configuration or data</p>
</li>
</ol>
<h2 id="heading-how-init-containers-work"><strong>How Do Init Containers Work?</strong></h2>
<p>Before jumping into the example, let’s see how it works step by step.</p>
<ol>
<li><p>When a Pod is created, the <strong>kubelet</strong> initializes all volumes and networks</p>
</li>
<li><p><strong>Kubelet</strong> starts init containers <strong>sequentially</strong> in the order defined in <strong>Pod spec</strong></p>
</li>
<li><p>Each init container must terminate completely before the next one starts</p>
</li>
<li><p>The <strong>kubelet</strong> waits for each init container to have exit code <strong>0</strong> before proceeding</p>
</li>
<li><p>If an init container fails, <strong>kubelet</strong> restarts it according to the Pod's <code>restartPolicy</code></p>
</li>
<li><p>Once all init containers complete successfully, kubelet <strong>initializes</strong> and starts all app containers simultaneously</p>
</li>
<li><p>Status of init containers is <strong>exposed</strong> in the Pod's <code>.status.initContainerStatuses</code> field</p>
</li>
<li><p>Init containers rerun completely during Pod <strong>restarts</strong> (unlike regular containers)</p>
</li>
<li><p>Kubelet retains init container logs until the Pod is deleted, even after container termination</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740900322258/e95dd1c4-6da9-4656-9b00-038a35e07dee.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-lets-see-a-example-django-with-postgresql">Let’s see an example: Django with PostgreSQL</h2>
<p>In this example, we’ll make sure the PostgreSQL database is ready before starting the Django application.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">django</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">initContainers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">wait-for-postgres</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:13</span>
    <span class="hljs-attr">command:</span> [
        <span class="hljs-string">'sh'</span>,
        <span class="hljs-string">'-c'</span>,
        <span class="hljs-string">'until pg_isready -h postgres-service -p 5432; do echo waiting for postgres; sleep 2; done;'</span>
    ]

  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">django</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">&lt;Image-name&gt;:&lt;Tag&gt;</span>
    <span class="hljs-attr">env:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DATABASE_URL</span>
      <span class="hljs-attr">value:</span> <span class="hljs-string">"postgres://user:password@postgres-service:5432/app_db"</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8000</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.21</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-config</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/nginx/conf.d/</span>
  <span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-config</span>
    <span class="hljs-attr">configMap:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-config</span>
</code></pre>
<p>In this example, the init container uses <code>pg_isready</code> from the official PostgreSQL image to check if the database is accepting connections. The Django application and Nginx containers only start after this verification succeeds.</p>
<h3 id="heading-lets-deploy-this-pod">Let’s deploy this pod.</h3>
<p>Save the YAML as <code>web-app.yaml</code> and apply it:</p>
<pre><code class="lang-bash">kubectl apply -f web-app.yaml
</code></pre>
<p>Check the pod status:</p>
<pre><code class="lang-bash">kubectl get pods
</code></pre>
<p>You’ll see the init container running first. Once it’s done, your app starts.</p>
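<p>A few commands help you watch the sequence as it happens (<code>-c</code> selects a specific container in the Pod):</p>
<pre><code class="lang-bash"># Watch the Pod; the STATUS column shows Init:0/1 while the init container runs
kubectl get pod django -w

# Stream the init container's logs
kubectl logs django -c wait-for-postgres

# Inspect the detailed init container status
kubectl get pod django -o jsonpath='{.status.initContainerStatuses[0].state}'
</code></pre>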
<h2 id="heading-resources-and-volumes-for-init-containers">Resources and Volumes for Init Containers</h2>
<p>Init containers can define resources and access volumes just like regular containers, with some important considerations:</p>
<ul>
<li><p><strong>Resource Requests and Limits</strong></p>
<ul>
<li><p>The highest resource request/limit for each resource across all init containers is used</p>
</li>
<li><p>The Pod's effective request/limit is the higher of the sum of app containers and the highest init container</p>
</li>
<li><p>This ensures init containers have enough resources without permanently reserving them</p>
</li>
</ul>
</li>
<li><p><strong>Volume Usage</strong></p>
<ul>
<li><p>Init containers can mount and modify volumes that app containers will later use</p>
</li>
<li><p>Files written to shared volumes by init containers are available to app containers</p>
</li>
<li><p>Changes made to shared volumes by init containers persist for app containers</p>
</li>
<li><p>Empty directories and persistent volumes can be shared across containers</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">django</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">initContainers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">wait-for-postgres</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">postgres:13</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'pg_isready -h postgres-service &amp;&amp; echo "DB schema initialized" &gt; /data/status.txt'</span>]
    <span class="hljs-attr">resources:</span>
      <span class="hljs-attr">limits:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"256Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
      <span class="hljs-attr">requests:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"128Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"250m"</span>
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/data</span>

  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">django</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">&lt;Image&gt;:&lt;Tag&gt;</span>
    <span class="hljs-attr">resources:</span>
      <span class="hljs-attr">limits:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
      <span class="hljs-attr">requests:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"256Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"250m"</span>
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/app/data</span>

  <span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
    <span class="hljs-attr">emptyDir:</span> {}
</code></pre>
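<p>As a quick illustration of the effective-request rule described above (a toy sketch in Python, not Kubernetes code — the kubelet performs this computation itself):</p>
<pre><code class="lang-python">def effective_request(init_requests, app_requests):
    # highest request among all init containers (they run one at a time)
    highest_init = max(init_requests, default=0)
    # app containers run together, so their requests are summed
    app_sum = sum(app_requests)
    # the Pod's effective request is the higher of the two
    return max(highest_init, app_sum)

# For the manifest above: init CPU request 250m, app CPU request 250m
print(effective_request([250], [250]))   # 250 (millicores)
</code></pre>
<p>For memory in the same manifest, the init container requests 128Mi and the app container 256Mi, so the Pod's effective memory request is 256Mi.</p>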
<h2 id="heading-init-container-vs-sidecar-container">Init container vs Sidecar container</h2>
<ol>
<li><p>Init Container performs tasks that need to be completed before the main container can start, while Sidecar Container provides supplementary functionality to the main container.</p>
</li>
<li><p>Init Container runs to completion and must succeed before the main container starts, while Sidecar Container runs continuously alongside the main container.</p>
</li>
<li><p>Init Container is used for setup tasks like waiting for dependencies, fetching configs, or running migrations, while Sidecar Container is used for tasks like logging, monitoring, or proxying.</p>
</li>
<li><p>Init Container always runs first in the pod lifecycle, while Sidecar Container runs concurrently with the main container.</p>
</li>
<li><p>If an Init Container fails, the pod restarts it until it succeeds, while a failing Sidecar Container may or may not impact the main container depending on its role.</p>
</li>
<li><p>Init Container does not run simultaneously with the main container, while Sidecar Container shares resources and runs in parallel with the main container.</p>
</li>
</ol>
<p>To sum up: init containers always run to completion, and each init container must complete successfully before the next one begins. If an init container fails, the kubelet restarts it until it succeeds (unless the Pod's <code>restartPolicy</code> is <code>Never</code>, in which case the Pod is marked as failed). Init containers can also be granted access to secrets or privileges that the app containers themselves should not have, keeping that access out of the long-running containers. Thank you.  </p>
<p>References:</p>
<p><a target="_blank" href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<p><a target="_blank" href="https://www.loft.sh/blog/kubernetes-init-containers">https://www.loft.sh/blog/kubernetes-init-containers</a></p>
<p><a target="_blank" href="https://www.alibabacloud.com/blog/kubernetes-init-containers_594725">https://www.alibabacloud.com/blog/kubernetes-init-containers_594725</a></p>
]]></content:encoded></item><item><title><![CDATA[Python Unit Testing: Mocking]]></title><description><![CDATA[Mocking means replacing real objects with pretend ones during testing, especially in unit tests. This technique lets you create different scenarios without using real resources, which saves time and effort. The Mock() object from the unittest.mock cla...]]></description><link>https://susiltiwari.com.np/python-unit-testing-mocking</link><guid isPermaLink="true">https://susiltiwari.com.np/python-unit-testing-mocking</guid><category><![CDATA[magicmock]]></category><category><![CDATA[Mocking]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Python]]></category><category><![CDATA[unit testing]]></category><category><![CDATA[pytest]]></category><category><![CDATA[side effect]]></category><category><![CDATA[patching]]></category><category><![CDATA[patch]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Wed, 19 Jun 2024 10:51:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1718806171084/09282d9c-b29f-4931-9f7b-51df1e487f7c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Mocking means replacing real objects with pretend ones during testing, especially in unit tests. This technique lets you create different scenarios without using real resources, which saves time and effort. The Mock() object from the unittest.mock module is crucial here. It allows you to define how these fake objects should behave and helps catch potential bugs or errors in the code.</p>
<h2 id="heading-mock-object">Mock() object</h2>
<p>The Mock() object from the unittest.mock module is at the heart of mocking in Python. It serves as a versatile tool for creating mock objects that behave like real ones but don't have actual functionality. We can set attributes, define return values for methods, and even assert how these mock objects are used in tests.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> Mock

<span class="hljs-comment"># Create a mock object</span>
mock_obj = Mock()

<span class="hljs-comment"># Set an attribute</span>
mock_obj.attribute = <span class="hljs-string">'value'</span>

<span class="hljs-comment"># Define a method's return value</span>
mock_obj.some_method.return_value = <span class="hljs-number">10</span>

<span class="hljs-comment"># Usage in a test scenario</span>
result = mock_obj.some_method()
<span class="hljs-keyword">assert</span> result == <span class="hljs-number">10</span>
</code></pre>
<p>mock_obj is a mock object where attribute is set to 'value', and some_method() is mocked to return 10. This lets you simulate different behaviors without accessing real resources.</p>
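<p>Because a Mock records every call made to it, you can also assert on how it was used after the fact — for example:</p>
<pre><code class="lang-python">from unittest.mock import Mock

mock_obj = Mock()
mock_obj.some_method(42, key='value')

# Verify the call happened exactly once with exactly these arguments
mock_obj.some_method.assert_called_once_with(42, key='value')
print(mock_obj.some_method.call_count)   # Output: 1
</code></pre>
<p>If the arguments don't match, <code>assert_called_once_with()</code> raises an AssertionError, which makes these checks easy to use inside tests.</p>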
<h2 id="heading-patch">patch()</h2>
<p>The patch function from unittest.mock acts as a decorator or a context manager. It temporarily replaces objects or functions during testing, allowing you to control their behavior within a specific scope.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> patch

<span class="hljs-comment"># Example of patching a function</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">original_function</span>():</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">'Real function'</span>

<span class="hljs-meta">@patch('__main__.original_function')</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_mocked_function</span>(<span class="hljs-params">mock_function</span>):</span>
    mock_function.return_value = <span class="hljs-string">'Mocked function'</span>
    result = original_function()
    <span class="hljs-keyword">assert</span> result == <span class="hljs-string">'Mocked function'</span>
</code></pre>
<p>Here, patch temporarily replaces original_function with a mock for the duration of test_mocked_function, so the test runs against the mocked behavior. Once the test finishes, the original function is restored automatically, leaving its behavior outside the test untouched.</p>
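<p>patch also works as a context manager, which limits the replacement to a with block (os.getcwd is used here purely as an illustration):</p>
<pre><code class="lang-python">import os
from unittest.mock import patch

with patch('os.getcwd', return_value='/fake/dir'):
    # inside the block, os.getcwd is the mock
    print(os.getcwd())   # Output: /fake/dir

# outside the block, the real function is restored automatically
print(os.getcwd() == '/fake/dir')   # Output: False
</code></pre>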
<h2 id="heading-sideeffect">side_effect</h2>
<p>The side_effect attribute of a mock object allows you to define what happens when the mock object is called. It can be set to a function, an exception, or a sequence of return values, making it versatile for testing different scenarios.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> MagicMock

<span class="hljs-comment"># Example using side_effect with MagicMock</span>
mock = MagicMock()

<span class="hljs-comment"># Define side_effect as a sequence</span>
mock.side_effect = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>]

print(mock())   <span class="hljs-comment"># Output: 1</span>
print(mock())   <span class="hljs-comment"># Output: 2</span>
print(mock())   <span class="hljs-comment"># Output: 3</span>
</code></pre>
<p>mock() is called three times, and each time it returns the next value from the side_effect list [1, 2, 3]. This demonstrates how side_effect can be used to simulate different return values sequentially during testing.</p>
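<p>side_effect can also raise an exception, which is useful for testing error-handling paths:</p>
<pre><code class="lang-python">from unittest.mock import Mock

# every call to this mock raises ConnectionError
flaky = Mock(side_effect=ConnectionError('database unreachable'))

try:
    flaky()
except ConnectionError as exc:
    print(f'Handled: {exc}')   # Output: Handled: database unreachable
</code></pre>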
<h2 id="heading-magicmock">MagicMock()</h2>
<p>MagicMock() is a subclass of Mock() that comes with default implementations of most magic (dunder) methods, such as __len__ and __iter__. It's useful when you need a mock object that behaves like a real one without needing to manually define every aspect.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unittest.mock <span class="hljs-keyword">import</span> MagicMock

<span class="hljs-comment"># Example of MagicMock usage</span>
mock_obj = MagicMock()

<span class="hljs-comment"># Set a method's return value</span>
mock_obj.some_method.return_value = <span class="hljs-string">'Mocked result'</span>

result = mock_obj.some_method()
print(result)   <span class="hljs-comment"># Output: 'Mocked result'</span>
</code></pre>
<p>mock_obj is a MagicMock object where some_method() is mocked to return 'Mocked result'. This simplifies mocking by automatically providing all necessary attributes and methods of the mocked object.</p>
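<p>The practical difference from a plain Mock() shows up with magic (dunder) methods, which MagicMock pre-configures and a plain Mock does not support:</p>
<pre><code class="lang-python">from unittest.mock import MagicMock

mock_list = MagicMock()
mock_list.__len__.return_value = 3
mock_list.__iter__.return_value = iter(['a', 'b', 'c'])

print(len(mock_list))    # Output: 3
print(list(mock_list))   # Output: ['a', 'b', 'c']
</code></pre>
<p>Trying the same with a plain Mock() raises a TypeError, because Mock does not define __len__ or __iter__.</p>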
<p>By leveraging these mocking techniques in Python, you can effectively isolate components for testing, simulate diverse scenarios, and ensure robustness in your codebase.</p>
]]></content:encoded></item><item><title><![CDATA[N+1 problem in Django?]]></title><description><![CDATA[In this blog, we'll delve into the N+1 problem in Django, explore its causes and implications, and provide practical solutions to optimize your application's performance. Let's get started!
What is the N+1 Problem?
The N+1 problem occurs when an appl...]]></description><link>https://susiltiwari.com.np/n1-problem-in-django</link><guid isPermaLink="true">https://susiltiwari.com.np/n1-problem-in-django</guid><category><![CDATA[Django]]></category><category><![CDATA[N+1]]></category><category><![CDATA[orm]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Fri, 02 Jun 2023 01:56:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1685634601070/a689da00-8c98-4421-83aa-2077921a9999.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, we'll delve into the N+1 problem in Django, explore its causes and implications, and provide practical solutions to optimize your application's performance. Let's get started!</p>
<h3 id="heading-what-is-the-n1-problem"><strong>What is the N+1 Problem?</strong></h3>
<p>The N+1 problem occurs when an application makes N+1 database queries to fetch related data, where N represents the number of primary objects. This leads to inefficient database queries, resulting in poor performance and increased response times. To illustrate this problem, let's consider a simple example.</p>
<p>Imagine you have a Django model called "<strong>Author</strong>" that has a foreign key relationship with a model called "<strong>Book</strong>."</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> django.db <span class="hljs-keyword">import</span> models

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Author</span>(<span class="hljs-params">models.Model</span>):</span>
    name = models.CharField(max_length=<span class="hljs-number">100</span>)

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Book</span>(<span class="hljs-params">models.Model</span>):</span>
    title = models.CharField(max_length=<span class="hljs-number">100</span>)
    author = models.ForeignKey(
                Author,
                on_delete=models.CASCADE,
                related_name=<span class="hljs-string">"books"</span> )
</code></pre>
<p>Now, suppose you want to retrieve a list of <strong>authors</strong> along with their respective <strong>books</strong>.</p>
<pre><code class="lang-python">authors = Author.objects.all()

<span class="hljs-keyword">for</span> author <span class="hljs-keyword">in</span> authors:
    books = Book.objects.filter(author=author)
</code></pre>
<p>In a naive implementation, you might iterate over the authors and fetch their associated books one by one, resulting in <strong>N+1</strong> queries.</p>
<h3 id="heading-impact-on-performance"><strong>Impact on Performance:</strong></h3>
<p>The N+1 problem can have a significant impact on the performance of your Django application. Each additional query introduces a round trip to the database, causing latency and slowing down the overall response time. As the number of primary objects increases, the problem exacerbates, leading to a substantial performance degradation.</p>
<h3 id="heading-how-to-solve-the-n1-problem-in-django"><strong>How to Solve the N+1 Problem in Django?</strong></h3>
<p>Fortunately, Django provides several powerful techniques to solve the N+1 problem and optimize database queries. Let's explore some effective solutions.</p>
<ol>
<li><p><strong>Select_related():</strong></p>
<p> One of the simplest ways to mitigate the N+1 problem is through eager loading. Django's ORM provides the <code>select_related()</code> method, which allows you to fetch related objects in a single query.</p>
<p> To apply eager loading in our example, modify the code as follows:</p>
<pre><code class="lang-python"> books = Book.objects.select_related(<span class="hljs-string">'author'</span>).all()

 <span class="hljs-keyword">for</span> book <span class="hljs-keyword">in</span> books:
     author = book.author  <span class="hljs-comment"># already loaded, no extra query</span>
</code></pre>
<p> Note that <code>select_related()</code> follows <em>forward</em> relationships (foreign keys and one-to-one fields), so it is applied from the <code>Book</code> side: Django performs a single SQL JOIN and each book's author comes back in the same query, eliminating the N+1 problem. It cannot follow a reverse relationship such as an author's set of books; for that, use <code>prefetch_related()</code>.</p>
</li>
<li><p><strong>Prefetch_related():</strong></p>
<p> In scenarios where the relationship is more complex, and eager loading alone may not be sufficient, Django offers the <code>prefetch_related()</code> method. This method efficiently fetches the related objects in a separate query, minimizing the impact of the N+1 problem. By using <code>prefetch_related('books')</code>, Django retrieves all the books associated with the authors using a single query, resulting in improved performance.</p>
<p> Consider the following example using <code>prefetch_related()</code>:</p>
<pre><code class="lang-python"> authors = Author.objects.prefetch_related(<span class="hljs-string">'books'</span>).all()

 <span class="hljs-keyword">for</span> author <span class="hljs-keyword">in</span> authors:
     books = author.books.all()
</code></pre>
<p> <strong>Important:</strong> It's worth noting that the use of <code>select_related()</code> is more suitable for foreign key and one-to-one relationships, while <code>prefetch_related()</code> is recommended for many-to-many and many-to-one relationships or scenarios involving more complex relationships.</p>
</li>
<li><p><strong>Use Annotations and Aggregations:</strong></p>
<p> By leveraging these features, you can reduce the number of database queries and optimize performance. For instance, you can annotate the queryset with the count of related books using <code>annotate(num_books=Count('books'))</code>. This way, you can fetch both authors and the count of their books in a single query.</p>
</li>
</ol>
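<p>A sketch of that annotation (a fragment — it assumes the Author/Book models defined earlier and a configured Django project):</p>
<pre><code class="lang-python">from django.db.models import Count

# One query: each author row carries the count of its related books
authors = Author.objects.annotate(num_books=Count('books'))

for author in authors:
    print(author.name, author.num_books)  # no extra query per author
</code></pre>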
<p>You can find these techniques in the <a target="_blank" href="https://docs.djangoproject.com/en/4.2/ref/models/querysets/#select-related">Django Documentation</a>.</p>
<p>Thanks for reading.👋</p>
]]></content:encoded></item><item><title><![CDATA[Cheat sheet of important Docker commands]]></title><description><![CDATA[Before jumping into the topic, If you're new to Docker and looking for effective ways to manage Docker containers, you're in luck! Docker commands provide a solution to help you achieve just that.
Docker is a widely used platform that enables develop...]]></description><link>https://susiltiwari.com.np/cheat-sheet-of-important-docker-commands</link><guid isPermaLink="true">https://susiltiwari.com.np/cheat-sheet-of-important-docker-commands</guid><category><![CDATA[Docker]]></category><category><![CDATA[docker images]]></category><category><![CDATA[docker cheat sheet]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sat, 20 May 2023 16:03:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1684594462093/a858e758-5a1c-4e03-a23b-cb911172f89c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before jumping into the topic, If you're new to Docker and looking for effective ways to manage Docker containers, you're in luck! Docker commands provide a solution to help you achieve just that.</p>
<p>Docker is a widely used platform that enables developers to package their applications, along with their dependencies and configurations, into containers that can seamlessly run across various environments.</p>
<p>With Docker commands, you gain the power to effortlessly <strong>create</strong>, <strong>run</strong>, <strong>stop</strong>, <strong>remove</strong>, and manage Docker containers. These commands prove to be invaluable in automating and streamlining the process of deploying and managing your applications within a containerized environment. By leveraging Docker commands, you can ensure a smooth and efficient workflow for your container-based applications.</p>
<p>You can easily visit the <a target="_blank" href="https://docs.docker.com/get-started/overview/">Docker Documentation</a> for more information.</p>
<h3 id="heading-docker-version"><strong>Docker version :</strong></h3>
<p>The "docker version" command displays the installed versions of the Docker client and server, nothing more than that.</p>
<pre><code class="lang-plaintext">docker version
</code></pre>
<h3 id="heading-docker-search"><strong>Docker search :</strong></h3>
<p>"docker search" finds and displays relevant Docker images from registries, helping you discover and choose base images for your applications.</p>
<pre><code class="lang-plaintext">docker search nginx
</code></pre>
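<h3 id="heading-docker-pull-command"><strong>Docker pull :</strong></h3>
<p>The "docker pull" command downloads an image (or a specific tag of it) from a registry such as Docker Hub without running it:</p>
<pre><code class="lang-plaintext">docker pull nginx
# pull a specific tag instead of the default "latest"
docker pull nginx:1.21
</code></pre>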
<h3 id="heading-docker-pull"><strong>Docker ps :</strong></h3>
<p>The "docker ps" command is valuable for efficiently checking the status and essential information of your running containers. It enables effective container monitoring and management, including tasks like identifying running containers, restarting or stopping specific containers, and inspecting their configurations.</p>
<p>To also list previously stopped containers, use the option <em>“-a”</em></p>
<pre><code class="lang-plaintext">sudo docker ps
# use the -a option to include stopped containers as well
sudo docker ps -a
</code></pre>
<h3 id="heading-docker-run"><strong>Docker run :</strong></h3>
<p>Create and start a container from an image.</p>
<pre><code class="lang-plaintext">docker run &lt;image&gt;
</code></pre>
<p>To run a container in the background and regain control of the current terminal, use the "-d" option with the "docker run" command.</p>
<pre><code class="lang-plaintext">docker run -d &lt;image_name&gt;
</code></pre>
<p>To specify specific ports for Docker containers, use the "-p" option to map host ports to container ports, enabling communication between the host machine and the container.</p>
<pre><code class="lang-plaintext">docker run -p &lt;host_port_number&gt;:&lt;container_port&gt; &lt;image_name&gt;
</code></pre>
<p>To give your container a name, use the option <em>“--name”</em>. Note that options must come before the image name, otherwise they are passed to the container's command:</p>
<pre><code class="lang-plaintext">docker run -p &lt;host_port_number&gt;:&lt;container_port&gt; --name &lt;container_name&gt; &lt;image_name&gt;
</code></pre>
<h3 id="heading-docker-stop">Docker stop :</h3>
<p>Gracefully stop a running container.</p>
<pre><code class="lang-plaintext"> docker stop &lt;container&gt;
</code></pre>
<h3 id="heading-docker-rm">Docker rm :</h3>
<p>Remove one or more stopped containers.</p>
<pre><code class="lang-plaintext">docker rm &lt;container&gt;
</code></pre>
<h3 id="heading-docker-images">Docker images :</h3>
<p>List available Docker images on your system.</p>
<pre><code class="lang-plaintext"> docker images
</code></pre>
<h3 id="heading-docker-rmi">Docker rmi :</h3>
<p>It is used to remove one or more Docker images from your system. You must first stop and remove any containers that use an image before you can delete it.</p>
<pre><code class="lang-plaintext">docker rmi &lt;image1&gt; &lt;image2&gt; &lt;image3&gt;
or,
docker image prune -a
</code></pre>
<h3 id="heading-docker-build">Docker build :</h3>
<p>Build a Docker image from a Dockerfile. The argument is the build context directory (Docker looks for a Dockerfile at its root); use "-t" to tag the image and "-f" to point to a Dockerfile elsewhere.</p>
<pre><code class="lang-plaintext">docker build -t &lt;image_name&gt; &lt;path/to/context&gt;
</code></pre>
<h3 id="heading-docker-compose-up">Docker-compose up :</h3>
<p>Start containers defined in a Docker Compose file.</p>
<pre><code class="lang-plaintext">docker-compose up
</code></pre>
<h3 id="heading-docker-compose-down">Docker-compose down :</h3>
<p>Stop and remove containers defined in a Docker Compose file.</p>
<pre><code class="lang-plaintext">docker-compose down
</code></pre>
<h3 id="heading-docker-network-ls">Docker network ls :</h3>
<p>List Docker networks on your system.</p>
<pre><code class="lang-plaintext">docker network ls
</code></pre>
<h3 id="heading-docker-stats">Docker stats :</h3>
<p>Display real-time resource usage statistics of running containers.</p>
<pre><code class="lang-plaintext">docker stats
</code></pre>
<h3 id="heading-docker-logs">Docker Logs :</h3>
<p>Display the logs of a specific container</p>
<pre><code class="lang-plaintext">docker logs &lt;container&gt;
</code></pre>
<p>And with that, we've covered the majority of the important commands in this post. If there are any commands I missed, feel free to mention them in the comments. That concludes our discussion for now. Happy Dockerizing!</p>
]]></content:encoded></item><item><title><![CDATA[Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-2)]]></title><description><![CDATA[If you haven't checked part-1 of this topic.
Please Checkout Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-1)
In this tutorial, We'll be configuring our Postgres in our Django application.So,Let's get started.
Checkout the Git repositor...]]></description><link>https://susiltiwari.com.np/dockerizing-django-with-postgres-nginx-and-gunicorn-part-2</link><guid isPermaLink="true">https://susiltiwari.com.np/dockerizing-django-with-postgres-nginx-and-gunicorn-part-2</guid><category><![CDATA[django rest framework]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Mon, 01 May 2023 14:34:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1682945305773/ea759414-6c81-44c7-825f-be2b5a8ad4e2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you haven't checked part-1 of this topic.</p>
<p>Please Checkout <a target="_blank" href="https://hashnode.com/post/clgsx8fpm000d09kwfv0bffkw">Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-1)</a></p>
<p>In this tutorial, we'll be configuring Postgres for our Django application. So, let's get started.</p>
<p>Checkout the Git repository for reference: <a target="_blank" href="https://github.com/susilnem/docker-drf-postgresql-gunicorn-nginx">Git Repository</a></p>
<h2 id="heading-postgres"><strong>Postgres</strong></h2>
<p>In order to set up Postgres, we will have to perform the following steps:</p>
<ul>
<li><p>Include a new service to the docker-compose.yml file</p>
</li>
<li><p>Modify the Django settings</p>
</li>
<li><p>Install Psycopg2 package</p>
</li>
</ul>
<p>Let's update the docker-compose.yml file</p>
<pre><code class="lang-plaintext">version: '3.9'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env
#or, use environment variable directly
    # environment:
    #    - POSTGRES_USER=${DB_USERNAME}
    #    - POSTGRES_PASSWORD=${DB_PASSWORD}
    #    - POSTGRES_DB=${DB_NAME}
volumes:
  static_data:
  postgres_data:
</code></pre>
<p>To ensure that the data is retained beyond the container's lifespan, we set up a volume configuration that maps the "postgres_data" directory to the <code>"/var/lib/postgresql/data/"</code> directory inside the container.</p>
<p>To properly configure the web service, it is necessary to update the <code>.env</code> file with additional environment variables.</p>
<pre><code class="lang-plaintext">SECRET_KEY=
ALLOWED_HOSTS= localhost 127.0.0.1 [::1]
DEBUG=True

# Database
DB_NAME=testing
DB_USERNAME=postgres
DB_PASSWORD=36050
# "db" matches the Postgres service name in docker-compose.yml
DB_HOSTNAME=db
DB_PORT=5432
</code></pre>
<p>Update the <code>DATABASES</code> setting in your <code>settings.py</code> file with the following code (the <code>config()</code> helper shown here reads values from the environment — for example via the python-decouple package):</p>
<pre><code class="lang-plaintext">DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': config('DB_NAME'),
        'USER': config('DB_USERNAME'),
        'PASSWORD': config('DB_PASSWORD'),
        'HOST': config('DB_HOSTNAME'),
        'PORT': config('DB_PORT', cast=int),
    }
}
</code></pre>
<p>Next, we will modify the Dockerfile to include the necessary packages for Psycopg2 installation.</p>
<pre><code class="lang-plaintext"># official base image
FROM python:3.10.9-alpine3.17

#set work directory
RUN mkdir /app
WORKDIR /app

#set environment variable
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

#install psycopg2 dependencies
RUN apk update &amp;&amp; apk add postgresql-dev gcc python3-dev musl-dev linux-headers

#install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy project
COPY . .
</code></pre>
<p>Make sure to install <a target="_blank" href="https://www.psycopg.org/">psycopg2</a>, create the database, and add psycopg2 to your requirements.txt file using <code>pip freeze &gt; requirements.txt</code></p>
<p>After that, Build the new image with two services:</p>
<pre><code class="lang-plaintext">$ docker-compose up -d --build
</code></pre>
<p>Then run the migrations:</p>
<pre><code class="lang-plaintext">$ docker-compose exec web python manage.py migrate --noinput
</code></pre>
<p>You can check that the volume was created as well by running:</p>
<pre><code class="lang-plaintext">$ docker volume inspect django-on-docker_postgres_data
</code></pre>
<p>You should see something similar to:</p>
<pre><code class="lang-plaintext">[
    {
        "CreatedAt": "2021-08-23T15:49:08Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "django-on-docker",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/django-on-docker_postgres_data/_data",
        "Name": "django-on-docker_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]
</code></pre>
<p>Afterward, create a new file named <code>"entrypoint.sh"</code> in the "root" directory of your project to ensure that Postgres is functioning correctly before applying the migrations and launching the Django development server.</p>
<pre><code class="lang-plaintext">#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$DB_HOSTNAME" "$DB_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi
#python manage.py collectstatic --no-input
exec "$@"
</code></pre>
<p>Update the file permissions locally:</p>
<p><code>$ chmod +x entrypoint.sh</code></p>
<p>Then, update the Dockerfile to copy over the <em>entrypoint.sh</em> file:</p>
<pre><code class="lang-plaintext"># official base image
FROM python:3.10.9-alpine3.17

#set work directory
RUN mkdir /app
WORKDIR /app

#set environment variable
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

#install psycopg2 dependencies
RUN apk update &amp;&amp; apk add postgresql-dev gcc python3-dev musl-dev linux-headers

#install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

#media files
RUN mkdir -p /media
RUN mkdir -p /static

# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

# copy project
COPY . .

# run entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
</code></pre>
<p>After that test it out:</p>
<ol>
<li><p>Re-build the images</p>
</li>
<li><p>Run the containers</p>
</li>
<li><p>Try <a target="_blank" href="http://localhost:8000/">http://localhost:8000/</a></p>
</li>
</ol>
<h2 id="heading-gunicorn"><strong>Gunicorn</strong></h2>
<p>To prepare for production environments, we will include Gunicorn, a production-grade WSGI server, in the requirements file. So, first of all, install Gunicorn and add it to the requirements.txt file.</p>
<p>To keep using Django's built-in server for development, create a separate compose file named docker-compose.prod.yml solely for production purposes:</p>
<pre><code class="lang-plaintext">version: '3.10'

services:
  web:
    build: .
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    ports:
      - "8000:8000"
    restart: always
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    env_file:
      - .env.prod
volumes:
  static_data:
  postgres_data:
</code></pre>
<p>Here, we're running Gunicorn rather than the Django development server.</p>
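<p>For context, Gunicorn can serve any WSGI callable, and <code>personal.wsgi:application</code> is exactly such a callable, generated for you by Django. A minimal hand-written sketch of what that interface looks like (illustrative only, not the Django-generated module):</p>

```python
# Minimal WSGI application: the interface Gunicorn expects.
# Django's personal/wsgi.py exposes an `application` callable with
# this same (environ, start_response) signature.
def application(environ, start_response):
    body = b"Hello from WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

<p>Django's version just routes the request through its URL resolver and middleware instead of returning a fixed body.</p>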
<p>Now let's create a <code>.env.prod</code> file for environment variables:</p>
<pre><code class="lang-plaintext">SECRET_KEY=
ALLOWED_HOSTS=localhost,127.0.0.1,[::1]
DEBUG=False

# Database
DB_NAME=testing
DB_USERNAME=postgres
DB_PASSWORD=36050
DB_HOSTNAME=localhost
DB_PORT=5432
</code></pre>
<p>Bring <a target="_blank" href="https://docs.docker.com/compose/reference/down/">down</a> the development containers (and the associated volumes with the <code>-v</code> flag):</p>
<pre><code class="lang-plaintext">$ docker-compose down -v
</code></pre>
<p>Then, build the production images and spin up the containers:</p>
<pre><code class="lang-plaintext">$ docker-compose -f docker-compose.prod.yml up -d --build
</code></pre>
<p>Now, we need to create a production Dockerfile (<code>Dockerfile.prod</code>) and an <code>entrypoint.prod.sh</code> file in the project's root directory. The first block below is entrypoint.prod.sh, which serves as the production entrypoint script; the second is Dockerfile.prod.</p>
<pre><code class="lang-plaintext">#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z "$DB_HOSTNAME" "$DB_PORT"; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi
python manage.py collectstatic --no-input
exec "$@"
</code></pre>
<pre><code class="lang-plaintext"># official base image
FROM python:3.10.9-alpine3.17

#set work directory
RUN mkdir /app
WORKDIR /app

#set environment variable
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

#install psycopg2 dependencies
RUN apk update &amp;&amp; apk add postgresql-dev gcc python3-dev musl-dev linux-headers

#install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

#media files
RUN mkdir -p /media
RUN mkdir -p /static

# copy entrypoint.prod.sh
COPY ./entrypoint.prod.sh .
RUN sed -i 's/\r$//g' /app/entrypoint.prod.sh
RUN chmod +x /app/entrypoint.prod.sh

# copy project
COPY . .

# run entrypoint.prod.sh
ENTRYPOINT ["/app/entrypoint.prod.sh"]
</code></pre>
<p>Now, update the web service in the production compose file to use the production Dockerfile:</p>
<pre><code class="lang-plaintext">services:
  web:
    build:
     context: .
     dockerfile: Dockerfile.prod
    command: gunicorn personal.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
    expose:
      - 8000
    restart: always
    env_file:
      - ./.env.prod
    depends_on:
      - db
</code></pre>
<p>Try it out:</p>
<pre><code class="lang-plaintext">$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
</code></pre>
<h2 id="heading-ngnix"><strong>Nginx</strong></h2>
<p>In terms of flexibility, Nginx offers an unparalleled degree of control. By configuring it as a reverse proxy for Gunicorn, you can achieve almost anything. To accomplish this, add the Nginx service to the production docker-compose file.</p>
<pre><code class="lang-plaintext">version: '3.10'

services:
  web:
    build:
      context: .
    command: gunicorn config.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - static_data:/app/static
      - media_data:/app/media
    expose:
      - 8000
    restart: always
    env_file:
      - ./.env
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
  nginx:
    build: ./nginx
    volumes:
      - static_data:/app/static
      - media_data:/app/media
    ports:
      - "8008:80"
    depends_on:
      - web
volumes:
  static_data:
  media_data:
  postgres_data:
</code></pre>
<p>Create the following files and folders:</p>
<pre><code class="lang-plaintext">└── nginx
    ├── Dockerfile
    └── nginx.conf
    └── uwsgi_params
</code></pre>
<p>Add this code inside the <strong>Dockerfile</strong>:</p>
<pre><code class="lang-plaintext">FROM nginx:1.21-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
COPY uwsgi_params /etc/nginx/uwsgi_params
</code></pre>
<p>Create <strong>uwsgi_params</strong> and add</p>
<pre><code class="lang-plaintext">
uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  REQUEST_SCHEME     $scheme;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;
</code></pre>
<p>create <strong>nginx.conf</strong> and add</p>
<pre><code class="lang-plaintext">upstream django_project {
    server web:8000;
}

server {
    listen 80;

    location /static {
        alias /static;
    }

    location /media {
        alias /media;
    }

    location / {
        uwsgi_pass web:8000;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
        include /etc/nginx/uwsgi_params;
    }
}
</code></pre>
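<p>A quick illustration of what the <code>proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;</code> directive does: each proxy hop appends the connecting client's address to the header, so the upstream can still recover the real client IP. A plain Python sketch of that behavior (hypothetical helper, not project code):</p>

```python
def forward_headers(client_ip, host, existing_xff=None):
    """Build the headers a reverse proxy passes to its upstream.

    Mirrors Nginx's $proxy_add_x_forwarded_for: append the connecting
    client's IP to any X-Forwarded-For value already on the request.
    """
    xff = f"{existing_xff}, {client_ip}" if existing_xff else client_ip
    return {"X-Forwarded-For": xff, "Host": host}
```

<p>Without these headers, Gunicorn would only ever see Nginx's own address as the client.</p>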
<p>For the static and media files, add the following inside settings.py:</p>
<pre><code class="lang-plaintext">#for static files (must match the static_data volume mount: /app/static)
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "static"

#for media files (must match the media_data volume mount: /app/media)
MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR / "media"
</code></pre>
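<p>One thing worth double-checking here: the directory collectstatic writes into must be the same container path the static_data volume mounts, or Nginx will serve an empty directory. A tiny sanity check of the paths assumed in this guide (the <code>/app</code> prefix comes from the Dockerfile's WORKDIR):</p>

```python
from pathlib import Path

# Paths assumed in this guide: WORKDIR /app in the Dockerfile, and the
# static_data volume mounted at /app/static in both web and nginx services.
BASE_DIR = Path("/app")
static_root = BASE_DIR / "static"   # what STATIC_ROOT should resolve to
volume_mount = Path("/app/static")  # where the shared volume is mounted

# collectstatic writes into STATIC_ROOT; nginx reads from the volume mount,
# so the two must refer to the same directory.
assert static_root == volume_mount
print("static paths line up at", static_root)
```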
<p>One last time, re-build, run, and try it out:</p>
<pre><code class="lang-plaintext">$ docker-compose -f docker-compose.prod.yml down -v
$ docker-compose -f docker-compose.prod.yml up -d --build
$ docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --noinput
</code></pre>
<p><strong><em>Summary:</em></strong></p>
<p>So here's the end. We walked through each step to containerize a Django web application along with Postgres for development. We also created a Docker Compose file suitable for production environments, incorporating Gunicorn and Nginx to handle static and media files. This enables local testing of a production setup.</p>
<p>Thank you so much, Bye.</p>
]]></content:encoded></item><item><title><![CDATA[Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-1)]]></title><description><![CDATA[Running a Django application in a production environment requires a server that can handle traffic, ensure stability, and provide scalability. Gunicorn is a widely used and trusted server for running Django applications in such an environment. In thi...]]></description><link>https://susiltiwari.com.np/dockerizing-django-with-postgres-nginx-and-gunicorn-part-1</link><guid isPermaLink="true">https://susiltiwari.com.np/dockerizing-django-with-postgres-nginx-and-gunicorn-part-1</guid><category><![CDATA[Django]]></category><category><![CDATA[Docker]]></category><category><![CDATA[nginx]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sun, 23 Apr 2023 04:40:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1682185656354/4d2df3cb-7ebf-46ae-b3d0-14c6c1fef62f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Running a Django application in a production environment requires a server that can handle traffic, ensure stability, and provide scalability. Gunicorn is a widely used and trusted server for running Django applications in such an environment. In this article, we'll be deploying a Django application with docker, postgres, gunicorn and nginx configurations. So, Let's start and understand everything in more detail step by step</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we begin, make sure that you have the following installed on your local machine:</p>
<ul>
<li><p>Python 3.7 or higher</p>
</li>
<li><p>Django</p>
</li>
<li><p>Django Rest Framework</p>
</li>
<li><p>Docker and Docker-compose</p>
</li>
</ul>
<p>Make sure to set up your Django project and run it locally. After everything is ready, let's make your Django application ready for deployment.</p>
<h1 id="heading-create-a-dockerfile">Create a Dockerfile</h1>
<p>The Dockerfile is a script that contains instructions on how to build a Docker image for your application. Create a new file named <code>Dockerfile</code> in the root directory of your Django project with the following content:</p>
<pre><code class="lang-plaintext"># official base image
FROM python:3.10.9-alpine3.17

#set work directory
RUN mkdir /app
WORKDIR /app

#set environment variable
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

#install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt

# copy project
COPY . .
</code></pre>
<p>Let's understand the code Line by Line,</p>
<p>The first line specifies the official base image to use: the python:3.10.9-alpine3.17 image. This is a lightweight version of Python that runs on Alpine Linux, a small, secure, and efficient Linux distribution.</p>
<p>Next, the working directory is set to <code>/app</code>, which is where the application code will reside inside the container.</p>
<p>Two environment variables are set to ensure that the Python application runs correctly in the container. The <code>PYTHONDONTWRITEBYTECODE</code> environment variable is set to 1, which instructs Python not to write .pyc bytecode files to disk. The <code>PYTHONUNBUFFERED</code> environment variable is also set to 1, which disables the buffering of standard output and standard error streams by Python.</p>
<p>The dependencies for the Django application are installed via pip. First, pip is upgraded to the latest version, and then the contents of <code>requirements.txt</code> are copied into the container and installed using pip.</p>
<p>Finally, the entire contents of the application's directory are copied into the Docker image. This includes the Django project files, any static files, templates, or media files, as well as any other necessary files for the application to run.</p>
<p>This Dockerfile sets up the basic environment required to run a Django application in a containerized environment and is a good starting point for building more complex Docker images for Django applications.</p>
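<p>You can observe the effect of these environment variables directly. For example, <code>PYTHONDONTWRITEBYTECODE</code> surfaces in the interpreter as <code>sys.dont_write_bytecode</code>; a quick check you can run outside any container:</p>

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONDONTWRITEBYTECODE=1 and ask whether
# bytecode writing is disabled; the Dockerfile's ENV line has the same effect.
env = {**os.environ, "PYTHONDONTWRITEBYTECODE": "1"}
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.dont_write_bytecode)"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # True
```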
<p>Let's go on more,</p>
<h1 id="heading-docker-compose">Docker Compose</h1>
<p>Docker Compose is a tool that allows you to define and run multi-container Docker applications. With Compose, you can define the services that make up your application, how they should be configured, and how they should interact with each other.</p>
<p>A Docker Compose file is a YAML file that defines the services, networks, and volumes required for your application. In a typical Django application, you might have several services, such as the Django web server, a database service like Postgres, and a web server like Nginx.</p>
<p>In a Docker Compose file, you would define each of these services as a separate container, along with their configuration options. You can also define networks and volumes that are shared between the containers.</p>
<p>So let's make a <code>docker-compose.yml</code> file in the project root:</p>
<pre><code class="lang-plaintext">version: '3.10'

services:
    app:
        build: .
        command: python manage.py runserver 0.0.0.0:8000
        volumes:
            - static_data:/app/static
        ports:
            - "8000:8000"
        env_file:
            - ./.env
</code></pre>
<p>Let's understand the code line by line,</p>
<p>The <code>command</code> instruction specifies the command to run when the container starts up. In this case, it's running the <code>runserver</code> command for Django, which starts a development server that listens on all available network interfaces (<code>0.0.0.0</code>) on port <code>8000</code>.</p>
<p>The <code>volumes</code> instruction creates a named volume named <code>static_data</code> that will be mounted inside the container at the path <code>/app/static</code>. This allows the container to access static files that are generated by Django and stored outside of the container.</p>
<p>The <code>ports</code> instruction maps port <code>8000</code> on the host machine to port <code>8000</code> inside the container, so that the Django application can be accessed by visiting <a target="_blank" href="http://localhost:8000"><code>http://localhost:8000</code></a> in a web browser.</p>
<p>Finally, the <code>env_file</code> instruction specifies a path to a file containing environment variables that should be loaded into the container. In this case, the file is located at <code>./.env</code>.</p>
<p>Now, create a <code>.env</code> file in the project root to store environment variables for development:</p>
<pre><code class="lang-plaintext">SECRET_KEY=
ALLOWED_HOSTS=localhost,127.0.0.1,[::1]
DEBUG=True
</code></pre>
<p>You need to change the SECRET_KEY, ALLOWED_HOSTS and DEBUG in your project settings file. The snippet below reads them with <code>python-decouple</code>, so install it (<code>pip install python-decouple</code>, and add it to requirements.txt) and import it in your settings file:</p>
<pre><code class="lang-plaintext">from decouple import config, Csv
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config('DEBUG', default=False, cast=bool)

ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=Csv())
</code></pre>
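<p>Note that decouple's <code>Csv()</code> cast splits the raw string on commas, which is why the values in <code>.env</code> should be comma-separated. A rough stand-in for what that cast does (plain Python for illustration, not the actual library):</p>

```python
def csv_cast(raw):
    """Rough sketch of decouple's Csv() cast: split on commas, strip blanks."""
    return [item.strip() for item in raw.split(",") if item.strip()]

# A space-separated value comes back as one single entry, so
# ALLOWED_HOSTS in .env needs commas between hosts.
print(csv_cast("localhost,127.0.0.1,[::1]"))
```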
<p>Now, Build the image:</p>
<pre><code class="lang-plaintext">$ docker-compose build
</code></pre>
<p>Once the image is built, run the container:</p>
<pre><code class="lang-plaintext">$ docker-compose up -d
</code></pre>
<p>Now Go to <a target="_blank" href="http://localhost:8000/">http://localhost:8000/</a> to see the project running.</p>
<p><strong><em>Summary:</em></strong></p>
<p>Up to now We built a Docker image of our application and run into a container that is running successfully. In the next part, We'll see how to integrate PostgreSQL with your application and dockerize further until then have a great time. Bye!</p>
<p>Part-2: <a target="_blank" href="https://hashnode.com/post/clh4xyzxp000409mjf8ed4qry">Dockerizing Django With Postgres, NGINX, and Gunicorn (PART-2)</a></p>
]]></content:encoded></item><item><title><![CDATA[Install PostgreSQL and pgadmin4 on your Ubuntu Easily]]></title><description><![CDATA[So, Recently I had to rebuild my operating system, and I was having trouble setting up my development environment, which includes PostgreSQL, on the new installation. As you may know, I work a lot with PostgreSQL, and I needed to install it before I ...]]></description><link>https://susiltiwari.com.np/install-postgresql-and-pgadmin4-on-your-ubuntu-easily</link><guid isPermaLink="true">https://susiltiwari.com.np/install-postgresql-and-pgadmin4-on-your-ubuntu-easily</guid><category><![CDATA[PostgreSQL]]></category><category><![CDATA[postgres]]></category><category><![CDATA[pgAdmin]]></category><category><![CDATA[Databases]]></category><category><![CDATA[Ubuntu]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Sat, 07 Jan 2023 14:33:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673091787504/42b5ee85-4564-4838-b6c3-2ecac861faba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>So, Recently I had to rebuild my operating system, and I was having trouble setting up my development environment, which includes PostgreSQL, on the new installation. As you may know, I work a lot with PostgreSQL, and I needed to install it before I could do anything.</p>
<p>Luckily, I was able to set everything up correctly and get my Django applications up and running in no time.</p>
<p>If you are unfamiliar with <strong>PostgreSQL,</strong> It is a high-performance, enterprise-grade, open-source relational database system. <strong>SQL</strong> (relational) and <strong>JSON</strong> (non-relational) querying are both supported by PostgreSQL.</p>
<p><strong>PostgreSQL</strong> is a very reliable database that has been developed by the open-source community for over 20 years.</p>
<p>Many web, mobile, and analytics applications use <strong>PostgreSQL</strong> as their primary database.</p>
<p><img src="https://miro.medium.com/max/875/1*PY24xlr4TpOkXW04HUoqrQ.jpeg" alt /></p>
<p>If you want to use PostgreSQL, here are the steps to install it on your machine.</p>
<h2 id="heading-installation"><strong>Installation</strong></h2>
<ol>
<li><p>First, update the package manager's cache with the following command</p>
<pre><code class="lang-plaintext"> $ sudo apt-get update
</code></pre>
</li>
<li><p>Install <strong>PostgreSQL</strong> with the following command</p>
<pre><code class="lang-plaintext"> sudo apt-get install postgresql postgresql-contrib
</code></pre>
</li>
<li><p><strong>Enable</strong> and start <strong>Postgresql</strong></p>
<pre><code class="lang-plaintext"> sudo systemctl enable postgresql
 sudo systemctl start postgresql
</code></pre>
<p> Once the installation is complete, we can check that the service is running by using the following command</p>
<pre><code class="lang-plaintext"> sudo systemctl status postgresql
</code></pre>
</li>
<li><p>By default, <strong>PostgreSQL</strong> creates a user named "<strong>postgres</strong>" during the installation process. We can switch to this <strong>user</strong> by using the following command:</p>
<pre><code class="lang-plaintext"> sudo -u postgres psql
</code></pre>
</li>
</ol>
<h2 id="heading-using-postgresql-roles-and-databases"><strong>Using PostgreSQL Roles and Databases</strong></h2>
<p>"Roles" are a tool used by Postgres for authentication and authorization. By default, the only user who can connect to the server is the postgres user, which Postgres creates during installation. Let's create our own superuser role to connect to the server:</p>
<p><code>sudo -u postgres createuser --superuser $USER</code></p>
<p>After that, since Postgres by default expects a database matching the $USER login name, we must create one:</p>
<p><code>$ sudo -u postgres createdb $USER</code></p>
<p>We can create a new database and a new user with the following commands:</p>
<pre><code class="lang-plaintext">sudo su - postgres
createdb db_name
echo "CREATE ROLE db_user WITH PASSWORD 'DUDTL39YHa91x4Y';" | psql
echo "ALTER ROLE db_user WITH LOGIN;" | psql
echo "GRANT ALL PRIVILEGES ON DATABASE db_name TO db_user;" | psql
exit
</code></pre>
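<p>If you ever generate statements like the GRANT above from code rather than typing them, it's safer to quote identifiers explicitly instead of leaning on shell quoting. A small sketch (hypothetical helper names, not part of psql):</p>

```python
def quote_ident(name):
    """Quote a PostgreSQL identifier: wrap in double quotes, double any inner ones."""
    return '"' + name.replace('"', '""') + '"'

def grant_all(db, user):
    # Compose the same GRANT statement used above, with identifiers quoted.
    return f"GRANT ALL PRIVILEGES ON DATABASE {quote_ident(db)} TO {quote_ident(user)};"

print(grant_all("db_name", "db_user"))
```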
<p>To enter inside Postgres, use this command:</p>
<p><code>psql -U postgres -h localhost</code></p>
<p>Some useful commands:</p>
<p><strong>List database:</strong> <code>\l</code></p>
<p><strong>List users:</strong> <code>\du</code></p>
<p>To <strong>exit</strong> the <strong>PostgreSQL</strong> shell<strong>,</strong> use the following command:</p>
<p><code>\q</code></p>
<p>To <strong>stop</strong> the <strong>PostgreSQL</strong> service, use the following command</p>
<p><code>sudo systemctl stop postgresql</code></p>
<h2 id="heading-install-pgadmin4"><strong>Install pgadmin4</strong></h2>
<p>pgAdmin4 isn't available in the default Ubuntu repositories, so we need to install it from the official pgAdmin4 APT repository. Start by setting up the repository: add its public key and create the repository configuration file.</p>
<pre><code class="lang-plaintext">$ curl https://www.pgadmin.org/static/packages_pgadmin_org.pub | sudo apt-key add
$ sudo sh -c 'echo "deb https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt/$(lsb_release -cs) pgadmin4 main" &gt; /etc/apt/sources.list.d/pgadmin4.list &amp;&amp; apt update'
</code></pre>
<p>Then install <strong>pgAdmin4</strong>,</p>
<pre><code class="lang-plaintext">$ sudo apt install pgadmin4
</code></pre>
<p>Boom! It's Done.</p>
<p>That’s all! For more information, see the PostgreSQL documentation and the pgAdmin 4 documentation. Remember to share your thoughts in the comment section below.</p>
]]></content:encoded></item><item><title><![CDATA[Install Tailwind CSS in Nuxt.js 3]]></title><description><![CDATA[Hey Guys, Today We'll look at installing and configuring Tailwind CSS in Nuxt.js 3. Server-side rendering (SSR) and static site generation work well with Nuxtjs (SSG). For speedier performance and a better developer experience, Nuxt 3 has been re-arc...]]></description><link>https://susiltiwari.com.np/install-tailwind-css-in-nuxtjs-3</link><guid isPermaLink="true">https://susiltiwari.com.np/install-tailwind-css-in-nuxtjs-3</guid><category><![CDATA[Nuxt]]></category><category><![CDATA[Nuxt.js]]></category><category><![CDATA[Tailwind CSS]]></category><category><![CDATA[nuxt3]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Sushil Tiwari]]></dc:creator><pubDate>Mon, 26 Dec 2022 18:25:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672069795585/4fb810ff-0b51-43a6-8519-0a3268db9c5b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p><strong>Hey Guys</strong>, today we'll look at installing and configuring <strong>Tailwind CSS</strong> in <strong>Nuxt.js 3</strong>. Nuxt.js works well with both server-side rendering (SSR) and static site generation (SSG). <strong>Nuxt 3</strong> has been re-architected with a smaller core for faster performance and a better developer experience. <strong>Tailwind CSS</strong> is a utility-first CSS framework, and the combination of Nuxt.js with Tailwind CSS is ideal.</p>
<p>Before getting started, make sure you have the following installed on your machine:</p>
<ul>
<li><p><strong>Node.js and npm (or yarn)</strong></p>
</li>
<li><p><strong>Nuxt.js 3</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-create-a-new-project"><strong>Create a New Project</strong></h2>
<ol>
<li><p>Open a terminal and navigate to the directory where you want to create your project.</p>
</li>
<li><p>Open your project folder in Visual Studio Code</p>
</li>
<li><p>Run the following command to create a new Nuxt.js project</p>
</li>
</ol>
<pre><code class="lang-javascript">npx nuxi init &lt;project-name&gt;
</code></pre>
<p>Replace <code>&lt;project-name&gt;</code> with the desired name for your project. It looks like this in Visual Studio Code. I am running on the same directory <code>npx nuxi init .</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672072469013/37c0a762-fae2-4b7a-9783-5cc9cad96a44.png" alt="install nuxt.js" class="image--center mx-auto" /></p>
<p>Now, Go to your project folder using this command.</p>
<p><code>cd &lt;project-name&gt;</code></p>
<ol>
<li><p>Install the dependencies</p>
<p> <code>yarn install</code> or <code>npm install</code></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672074178541/09bc7e39-b1ca-4315-96cd-1f37b4c5839f.png" alt class="image--center mx-auto" /></p>
<p>After installing the prerequisites for our application, we will set up Tailwind CSS and all of the other components it needs to function properly.</p>
<hr />
<h3 id="heading-installing-tailwind-dependencies"><strong>Installing Tailwind dependencies</strong></h3>
<ol>
<li><p>Run the following command in your project directory:</p>
<p> <code>yarn add -D tailwindcss@latest postcss@latest autoprefixer@latest</code></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672074581576/71d465ce-1c6c-47ec-ace2-308bc6ecb39e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-generating-a-tailwind-config"><strong>Generating a Tailwind config</strong></h3>
<ol>
<li><p>Generate a Tailwind CSS configuration file by running</p>
<p> <code>npx tailwindcss init -p</code></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672074955946/415b9245-8ae6-4d55-b1d4-e8011b516303.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-configuring-tailwind-config-file"><strong>Configuring Tailwind Config file</strong></h3>
<ol>
<li><p>You'll need to configure Tailwind and let it know which files to purge. Open tailwind.config.js and add the following:</p>
<pre><code class="lang-javascript"> <span class="hljs-comment">/** <span class="hljs-doctag">@type <span class="hljs-type">{import('tailwindcss').Config}</span> </span>*/</span>
 <span class="hljs-built_in">module</span>.exports = {
   <span class="hljs-attr">content</span>: [
     <span class="hljs-string">"./components/**/*.{js,vue,ts}"</span>,
     <span class="hljs-string">"./layouts/**/*.vue"</span>,
     <span class="hljs-string">"./pages/**/*.vue"</span>,
     <span class="hljs-string">"./plugins/**/*.{js,ts}"</span>,
     <span class="hljs-string">"./nuxt.config.{js,ts}"</span>,
     <span class="hljs-string">"./app.vue"</span>,
   ],
   <span class="hljs-attr">theme</span>: {
     <span class="hljs-attr">extend</span>: {},
   },
   <span class="hljs-attr">plugins</span>: [],
 };
</code></pre>
</li>
</ol>
<h3 id="heading-adding-tailwind-to-project-styles"><strong>Adding Tailwind to project styles</strong></h3>
<ol>
<li><p>You can customize the theme, variants, and plugins by adding properties to the corresponding objects.</p>
<p> Next, create a <code>tailwind.css</code> file in the <code>assets</code> directory:</p>
</li>
</ol>
<pre><code class="lang-css"><span class="hljs-keyword">@tailwind</span> base;
<span class="hljs-keyword">@tailwind</span> components;
<span class="hljs-keyword">@tailwind</span> utilities;
</code></pre>
<ol>
<li>Now, we move to the <code>nuxt.config.ts</code> file to add some configuration related to <strong>Tailwind CSS</strong>.</li>
</ol>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> defineNuxtConfig({
  css: [<span class="hljs-string">'~/assets/tailwind.css'</span>],
  postcss: {
    plugins: {
      tailwindcss: {},
      autoprefixer: {},
    },
  },
});
</code></pre>
<h3 id="heading-testing-out-tailwind-css"><strong>Testing out Tailwind CSS</strong></h3>
<p>Open the <code>app.vue</code> file and replace the <code>&lt;NuxtWelcome&gt;</code> inside the div with</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"flex h-20 items-center m-auto bg-red-500 justify-center text-3xl text-white font-medium "</span>&gt;</span>
 Hello world
<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
</code></pre>
<p>Run your build process with <code>yarn dev</code> or <code>npm run dev</code></p>
<p>Now go to <code>http://localhost:3000/</code> in a web browser.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672077958433/0e157829-4f25-4364-b480-2325a334aa6b.png" alt="Output" class="image--center mx-auto" /></p>
<p><strong>BOOM!</strong> We did it.</p>
<hr />
<p><strong><em>Finally</em></strong>, We configure <strong>Tailwind CSS</strong> in <strong>Nuxt.js 3.</strong></p>
<p><strong>Thank you</strong> for being here <strong>Bye</strong>.👋</p>
]]></content:encoded></item></channel></rss>