<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Diving DevOps]]></title><description><![CDATA[I'm diving DevOps topics and sharing my expertise.]]></description><link>https://tymik.me</link><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 20:46:47 GMT</lastBuildDate><atom:link href="https://tymik.me/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Terraform State: Why I had to break up with my monolith (and how you can do it too!)]]></title><description><![CDATA[Let’s talk about something every DevOps engineer eventually faces - the moment you realize your Terraform state file has become a monster.You know, that creeping feeling when terraform plan takes longer than your coffee to brew, and running terraform...]]></description><link>https://tymik.me/terraform-state-why-i-had-to-break-up-with-my-monolith-and-how-you-can-do-it-too</link><guid isPermaLink="true">https://tymik.me/terraform-state-why-i-had-to-break-up-with-my-monolith-and-how-you-can-do-it-too</guid><category><![CDATA[Terraform]]></category><category><![CDATA[State Management ]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Tue, 01 Jul 2025 17:03:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752770970430/de35c85f-4344-40aa-a3c0-81c6198d860c.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s talk about something every DevOps engineer eventually faces - the moment you realize your Terraform state file has become a monster.<br />You know, that creeping feeling when <code>terraform plan</code> takes longer than your coffee to brew, and running <code>terraform apply</code> feels 
like launching a rocket - except you’re not sure where it’ll land.</p>
<p>I’ve been there. Early on, I loved the simplicity of a single state file.<br />One place to rule them all! But as my infrastructure grew, so did the headaches.<br />Here’s why I (and probably you, too) needed to split up that Terraform state - whether you’re still running everything in one file, or you’ve already split things up and are wondering if you need to go further.</p>
<h2 id="heading-the-monolith-why-one-state-file-feels-good-until-it-doesnt">The monolith: Why one state file feels good… Until it doesn’t</h2>
<p>At first, having everything in one state file feels like freedom.<br />All your resources, all your environments, one command to manage them all.<br />But then:</p>
<ul>
<li><p><strong>Performance Tanks:</strong> Suddenly, <code>terraform plan</code> is slow.<br />  So slow, you start checking your WiFi.<br />  But it’s not your connection - it’s the state file, bloated with hundreds (or thousands) of resources.</p>
</li>
<li><p><strong>The “Oh Crap” Moment:</strong> Ever accidentally deleted a production resource while working on dev?<br />  With one state file, the blast radius of a mistake is the whole infrastructure.<br />  Definitely not fun.</p>
</li>
<li><p><strong>Team Gridlock:</strong> Terraform locks the state file during changes.<br />  So if Victoria is updating a subnet and Leon is tweaking a security group, someone’s waiting.<br />  And waiting.<br />  And waiting…<br />  And… You get the point ;)</p>
</li>
<li><p><strong>Secrets Everywhere:</strong> That one state file?<br />  It’s got all your secrets.<br />  Anyone who can read it can see everything.<br />  Not ideal when you want to keep prod secrets, well, secret.</p>
</li>
<li><p><strong>Environments Collide:</strong> Mixing dev, staging, and prod in one state is a recipe for disaster.<br />  One wrong move, and your “test” change hits prod.</p>
</li>
</ul>
<p>So, what do you do?<br />You start splitting.</p>
<h2 id="heading-already-split-heres-why-you-might-need-to-split-again">Already split? Here’s why you might need to split again</h2>
<p>Maybe you’ve already broken things up - a state file for networking, another for compute, maybe one per environment.<br />But the pain isn’t gone:</p>
<ul>
<li><p><strong>Still Too Big:</strong> Even after splitting, some state files keep growing.<br />  That “networking” state now covers VPCs, subnets, gateways, peering, and more.<br />  Time to split again - maybe VPC in one, subnets in another.</p>
</li>
<li><p><strong>Team Ownership:</strong> As teams grow, so do ownership boundaries.<br />  The DB team doesn’t want to wait for the network team to finish their changes.<br />  Or maybe you have split teams and responsibilities?<br />  Give them their own state, let them move fast.</p>
</li>
<li><p><strong>Parallel Deployments:</strong> CI/CD pipelines are happiest when they can run in parallel.<br />  Multiple state files mean multiple pipelines, all running at once, no waiting.</p>
</li>
<li><p><strong>Refactoring Time:</strong> Infrastructure evolves.<br />  Maybe you want to turn that old-school monolith into shiny new modules.<br />  Moving resources into their own state files makes this possible (and safe).</p>
</li>
<li><p><strong>Compliance &amp; Auditing:</strong> Sometimes, it’s not about what you want - it’s what the auditors want.<br />  Need to separate PCI or GDPR workloads?<br />  Split the state.</p>
</li>
<li><p><strong>Complex Dependencies:</strong> When you’re referencing resources across modules, things get messy.<br />  Splitting state helps keep dependencies clear and manageable.</p>
</li>
</ul>
<h2 id="heading-my-rule-of-thumb">My rule of thumb</h2>
<p>Start simple.<br />But as soon as you feel the pain - slow plans, team friction, scary blast radius - split.<br />And don’t be afraid to split again as your infra grows.<br />It’s not a sign of bad design - it’s a sign that you’re scaling.<br />In Poland we say: “Kto to Panu tak spierdolił?” (roughly: “Who screwed this up for you so badly?”).<br />Don’t think like that…<br />Something that may seem poorly designed worked in the past and delivered value - it just doesn’t necessarily do so anymore, and it’s time for a refactor.</p>
<p>If you’re using Terragrunt (like I sometimes do), it makes this even easier.<br />Each module, each environment, its own state.<br />Clean, fast, and safe.</p>
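<p>For illustration, a minimal sketch of a root <code>terragrunt.hcl</code> could look like this - the bucket name and region are hypothetical placeholders, not a recommendation:</p>
<pre><code class="lang-bash"># terragrunt.hcl (root) - a sketch, assuming an S3 backend
remote_state {
  backend = "s3"
  config = {
    bucket = "my-terraform-states"                             # hypothetical bucket
    key    = "${path_relative_to_include()}/terraform.tfstate" # one state per directory
    region = "eu-west-1"
  }
}
</code></pre>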
<h2 id="heading-tldr">TL;DR</h2>
<p>Splitting Terraform state files isn’t just a best practice - it’s a survival tactic as your infrastructure grows.<br />It keeps your team moving, your secrets safe, and your weekends free from “terraform panic” moments.</p>
<p>Been there, done that, got the (split) state files to prove it.</p>
<h2 id="heading-you-convinced-me-so-how-do-i-split-a-terraform-state">You convinced me! So how do I split a Terraform state?</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ol>
<li><p>Ensure your S3 bucket for holding states has versioning enabled.<br /> You don’t want to find out you have no backup after messing the state up.</p>
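<p> A quick way to check this with the AWS CLI (the bucket name is an example):</p>
<pre><code class="lang-bash"># Should print "Status": "Enabled" for a versioned bucket
aws s3api get-bucket-versioning --bucket my-terraform-states

# Enable versioning if it is not enabled yet
aws s3api put-bucket-versioning --bucket my-terraform-states \
  --versioning-configuration Status=Enabled
</code></pre>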
</li>
<li><p>Have a state locking mechanism in place to avoid external changes in the middle of the process.<br /> If you use <a target="_blank" href="https://www.runatlantis.io/">Atlantis</a>, you’re probably on the safe side.<br /> If you orchestrate Terraform with a different CI/CD solution, ensure it locks the states.</p>
</li>
<li><p>Be aware that Terraform state may contain sensitive information.<br /> You will download this state locally.<br /> <strong>Be very careful so that you don’t commit a local state file to the repository!</strong><br /> Ensure you remove all the local state files after you finish.</p>
</li>
</ol>
<h3 id="heading-steps-to-move-resources-between-terraform-states">Steps to move resources between Terraform states</h3>
<p>Before you begin, a short clarification note - I use <strong>current state</strong> to refer to the state you already have before the split, which will most likely still exist, just with fewer resources; and I use <strong>new state</strong> to refer to the state you create and move resources to.<br />In case something is not clear, leave me a comment and I will improve the write-up.</p>
<ol>
<li><p>Create a new state directory in your configuration, with <code>terraform.tf</code> <strong>pointing to the new state location</strong> in your S3 bucket for states.<br /> <strong>Be sure you don’t change the state path in the current state!</strong></p>
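<p> For example, a sketch of the new <code>terraform.tf</code> - bucket, key and region below are placeholders for your own values:</p>
<pre><code class="lang-bash">terraform {
  backend "s3" {
    bucket         = "my-terraform-states"          # the same bucket as the current state
    key            = "networking/terraform.tfstate" # a NEW key - the new state location
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"              # only if you use DynamoDB-based locking
  }
}
</code></pre>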
</li>
<li><p>[OPTIONAL] If you use Atlantis, add the new directory to the <code>atlantis.yaml</code> file.</p>
</li>
<li><p>Open a PR that will lock both states and ensure nobody will interact with these states during migration.</p>
<ol>
<li><p>If you use Atlantis, it will create the locks. Just make it clear in the PR that nobody should remove the lock (e.g. add <code>DO NOT TAKE LOCK</code> to the PR title if you don’t have any mechanism that guarantees nobody takes the lock).</p>
</li>
<li><p>If you don’t use Atlantis, ensure your CI/CD solution locks the states you work on.</p>
</li>
<li><p>If you don’t use any CI/CD for Terraform execution, then clearly communicate to all your Terraform contributors that you are splitting this particular state and that they cannot work with it until you finish the split.</p>
</li>
</ol>
</li>
<li><p>Move Terraform resources, modules and outputs to the new state, as needed.</p>
</li>
<li><p>In the current state, create a <code>terraform_remote_state</code> data source pointing to the new state - the same path you used in the new <code>terraform.tf</code>.</p>
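<p> A sketch of such a data source - the <code>config</code> must match the new backend configuration, and the values below (including the name <code>networking</code>) are hypothetical:</p>
<pre><code class="lang-bash">data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "my-terraform-states"
    key    = "networking/terraform.tfstate" # same key as in the new terraform.tf
    region = "eu-west-1"
  }
}
</code></pre>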
</li>
<li><p>In the current state, refer to the outputs of this new <code>terraform_remote_state</code> for the resources that you moved.</p>
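<p> For example, assuming the new state exposes a hypothetical <code>vpc_id</code> output and you named the data source <code>networking</code>:</p>
<pre><code class="lang-bash"># In the new state:
output "vpc_id" {
  value = aws_vpc.my_vpc.id
}

# In the current state, replace direct references like aws_vpc.my_vpc.id with:
# data.terraform_remote_state.networking.outputs.vpc_id
</code></pre>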
</li>
<li><p>Pull both states locally with <code>terraform state pull &gt; state.backup</code>, respectively in current and new state directories.<br /> <strong>Remember: Terraform states may contain highly sensitive information - work with caution!</strong></p>
</li>
<li><p>In both directories, make copies of the pulled states with <code>cp state.backup modified.state</code>.</p>
</li>
<li><p>Now you can move your resources between states with:</p>
<pre><code class="lang-bash"># For regular resources:
terraform state mv -state /path/to/current/state/modified.state -state-out /path/to/new/state/networking/modified.state aws_vpc.my_vpc aws_vpc.my_vpc

# For modules:
terraform state mv -state /path/to/current/state/modified.state -state-out /path/to/new/state/networking/modified.state module.my_module module.my_module
</code></pre>
<p> <strong>Moving modules is actually awesome here, as you can move an entire module without moving its internal resources one by one - this simplifies things a lot!</strong></p>
</li>
<li><p>Push both states, in both directories, with <code>terraform state push modified.state</code></p>
</li>
<li><p>Plan the new state - you should see no changes to resources if everything went well, but you will see new outputs to be created.<br />If not, go back to step 9 and move what you missed.</p>
</li>
<li><p>Apply the new state so it has the outputs - they are referenced in the current state.</p>
</li>
<li><p>Plan the current state - you should see no changes if everything went well so far.<br />If not, go back to step 9 and move what you missed.</p>
</li>
<li><p>Apply the current state.</p>
</li>
<li><p>At this point everything is good, so merge your PR.</p>
</li>
<li><p>Now the process is complete.<br />Moving resources produced some <code>modified.state.*.backup</code> files in both directories - you can now remove them, as well as the <code>modified.state</code> and <code>state.backup</code> files.</p>
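<p> A one-liner to run in each of the two directories:</p>
<pre><code class="lang-bash">rm -f modified.state state.backup modified.state.*.backup
</code></pre>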
</li>
<li><p>Search your repositories for references from other states to the current state.<br />On GitHub you can search across your entire Organization.</p>
<ol>
<li><p>If any state refers to the moved resources, add a <code>terraform_remote_state</code> data source pointing to the new state, as in step 5.</p>
</li>
<li><p>Point the relevant resources to that new <code>terraform_remote_state</code> data source.</p>
</li>
<li><p>Plan that state - you should see no changes.<br /> If not, verify the previous steps to check whether you missed or mistook anything.</p>
</li>
<li><p>Apply that state.</p>
</li>
<li><p>Repeat for every additional state that referenced the state that was split.</p>
</li>
</ol>
</li>
<li><p>The process is over, you have cleaned up the state 💪</p>
</li>
</ol>
<p>If you want to dive deeper, or need help with your own Terraform breakup story, <a target="_blank" href="https://www.linkedin.com/in/jantyminski/">hit me up</a> - always happy to talk about infra (or scuba stories 🤿).</p>
<p>You can also hire me as a consultant if you feel you need support - just <a target="_blank" href="https://www.linkedin.com/in/jantyminski/">drop me a message</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Account Name and Account Alias in AWS are not the same thing!]]></title><description><![CDATA[Did you know that?
I was surprised when I found out…
I was about to implement an Organization wide solution with AWS Control Tower Account Factory for Terraform (AFT) that, in my case, is using Account Name for further resources creation.As I don’t r...]]></description><link>https://tymik.me/account-name-and-account-alias-in-aws-are-not-the-same-thing</link><guid isPermaLink="true">https://tymik.me/account-name-and-account-alias-in-aws-are-not-the-same-thing</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS Organizations]]></category><category><![CDATA[AWS Account Management]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Mon, 10 Mar 2025 16:09:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741183413030/60de01f4-fd1d-4e77-94c2-0b0f79ebe519.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Did you know that?</p>
<p>I was surprised when I found out…</p>
<p>I was about to implement an Organization-wide solution with <a target="_blank" href="https://docs.aws.amazon.com/controltower/latest/userguide/aft-overview.html"><strong>AWS Control Tower Account Factory for Terraform (AFT)</strong></a> that, in my case, uses the <strong>Account Name</strong> for further resource creation.<br />As I don’t read <strong>Account Aliases</strong> often with Terraform, I googled, and the first result I got was <a target="_blank" href="https://stackoverflow.com/questions/72175065/how-to-get-the-aws-account-name-using-terraform">this question on StackOverflow</a>.<br />I became a bit worried about how it could affect my solution, but I decided to proceed.<br />And - unfortunately - I confirmed this is true.</p>
<h2 id="heading-account-name-vs-account-alias">Account Name vs. Account Alias</h2>
<p>I want to briefly describe how <strong>Account Name</strong> and <strong>Account Alias</strong> differ from each other.</p>
<p>As they are different things, they don’t have to be equal and it may require proper handling depending on the situation.</p>
<h3 id="heading-account-name">Account Name</h3>
<p>The <strong>Account Name</strong> in AWS is a name that is used by <strong>AWS Organizations</strong> and it is organized within the scope of AWS Organizations.</p>
<p>The details regarding AWS Organizations are available only to the <strong>Root Account</strong> of the AWS Organization, therefore reading it with the <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/organizations_organization.html"><code>aws_organizations_organization</code> data source</a> requires configuring Terraform to access that account first to read the details.<br />This is not straightforward, as it requires reading two accounts at once - technically possible with separate AWS providers connecting to different accounts, but this is not necessarily an optimal approach and may not be possible in some organizations due to additional constraints.<br />It is also possible to overcome this by storing the <strong>Account Name</strong> in <a target="_blank" href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html"><strong>AWS SSM Parameter Store</strong></a> for each account in the Organization, but then we would already be building a solution that requires maintenance, so it may not be a suitable option.</p>
<p>Because <strong>Account Name</strong> exists in terms of <strong>AWS Organization</strong>, it doesn’t have to be unique across the whole AWS - it only needs to be unique in your Organization.</p>
<p>Unlike the <strong>Account Alias</strong>, the <strong>Account Name</strong> has to be set - <a target="_blank" href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/organizations/client/create_account.html#:~:text=AccountName%20\(string,%5BREQUIRED%5D">it is required</a>.</p>
<p>You can read more about accounts in AWS Organizations in <a target="_blank" href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs-manage_accounts_members.html"><strong>Managing member accounts with AWS Organizations User Guide</strong></a>.</p>
<h3 id="heading-account-alias">Account Alias</h3>
<p>The <strong>Account Alias</strong> is a different thing.</p>
<p>An <strong>AWS Account</strong> doesn’t have to be a part of an <strong>AWS Organization</strong> - so the <strong>Account Name</strong> for this account may not exist.<br />Yet every AWS Account can have an <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/console-account-alias.html"><strong>Account Alias</strong></a>.</p>
<p>The <strong>Account Alias</strong> is used to identify a particular <strong>AWS Account</strong>.<br /><strong>It is optional, so it doesn’t have to be set for an account, but if it is set, it has to be globally unique.</strong><br />Because it uniquely identifies the account, it may be used instead of the <strong>Account ID</strong> to log in to the <strong>AWS Console</strong>.<br />The <strong>Account Alias</strong> is easy to remember, and many organizations and individuals use Aliases for simplification.</p>
<p>The <strong>Account Alias</strong> can also be <strong>deleted</strong>.</p>
<h2 id="heading-my-scenario">My scenario</h2>
<p>In my scenario the <strong>Account Name</strong> is different from the <strong>Account Alias</strong> for at least a subset of accounts.<br />Therefore I could not rely on the data source below without additional modifications:</p>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_iam_account_alias"</span> <span class="hljs-string">"current"</span> {}
</code></pre>
<p>Luckily I could resolve this, as in my case there is a pattern, so I could manipulate the <code>data.aws_iam_account_alias.current.account_alias</code> value at a later step to achieve my goal.</p>
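<p>To give a concrete - and purely hypothetical - example of such a manipulation: if your aliases follow a pattern like <code>mycompany-&lt;account-name&gt;</code>, you can strip the prefix in a local value:</p>
<pre><code class="lang-bash">data "aws_iam_account_alias" "current" {}

locals {
  # Hypothetical pattern: alias "mycompany-prod" -&gt; name "prod"
  account_name = trimprefix(data.aws_iam_account_alias.current.account_alias, "mycompany-")
}
</code></pre>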
<p>For my scenario it was creating buckets following a certain pattern for the logs of one of the landing zone components, and I needed unification across the whole Organization.<br />The optimal solution would be to use the <code>aws_s3_bucket</code> data source to read the bucket from each account when configuring the logs that should be pointed there - but the <code>aws_s3_bucket</code> data source doesn’t support wildcards at the moment and the full bucket name is required - so this is not a solution for me, as it doesn’t simplify anything.</p>
<p>There is an <a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/issues/15544">open issue</a> for the <code>aws_s3_bucket</code> data source to enable retrieving a list of buckets matching a particular filter, which would work really well in my scenario, but the issue has been open since October 2020 and doesn’t seem to be prioritized.<br />And this is especially frustrating because <a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/pull/25895">a PR with this feature</a> was already submitted!</p>
<p>Luckily, the S3 ARN doesn’t contain any unique IDs - it is of the form <code>arn:aws:s3:::${BUCKET_NAME}</code> - and my buckets didn’t require any random ID to be unique, so it was easy to just manipulate the <strong>Account Alias</strong> accordingly to achieve the final goal.</p>
<p>And I am writing all this because, as it was the case for me, it may well be the case for you!<br />I have worked in the IT field long enough to see that people follow similar patterns even though they have never met each other - some details may differ, but the patterns repeat - and with these patterns, you might run into this issue at some point, just as I did.</p>
]]></content:encoded></item><item><title><![CDATA[AWS EKS - Error: You must be logged in to the server (the server has asked for the client to provide credentials)]]></title><description><![CDATA[Today I was working on a PoC to try out the new Amazon EKS Auto Mode feature announced on re:Invent 2024.I wanted to make a very quick PoC, so I took the least effort approach to set up my cluster.
When I wanted to start deploying the app proposed by...]]></description><link>https://tymik.me/aws-eks-error-you-must-be-logged-in-to-the-server-the-server-has-asked-for-the-client-to-provide-credentials</link><guid isPermaLink="true">https://tymik.me/aws-eks-error-you-must-be-logged-in-to-the-server-the-server-has-asked-for-the-client-to-provide-credentials</guid><category><![CDATA[AWS]]></category><category><![CDATA[EKS]]></category><category><![CDATA[kubectl]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Mon, 30 Dec 2024 12:22:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735559846147/284b5a88-da3e-4427-b6da-96473838862c.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today I was working on a PoC to try out the <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-eks-auto-mode/">new Amazon EKS Auto Mode feature announced on re:Invent 2024</a>.<br />I wanted to make a very quick PoC, so I took the least effort approach to set up my cluster.</p>
<p>When I wanted to start deploying the app proposed by AWS in their article about this feature, I was hit by the following error when running any <code>kubectl</code> command:</p>
<pre><code class="lang-bash">E1230 11:20:11.743615    4991 memcache.go:265] <span class="hljs-string">"Unhandled Error"</span> err=<span class="hljs-string">"couldn't get current server API group list: the server has asked for the client to provide credentials"</span>
E1230 11:20:12.438833    4991 memcache.go:265] <span class="hljs-string">"Unhandled Error"</span> err=<span class="hljs-string">"couldn't get current server API group list: the server has asked for the client to provide credentials"</span>
E1230 11:20:13.117818    4991 memcache.go:265] <span class="hljs-string">"Unhandled Error"</span> err=<span class="hljs-string">"couldn't get current server API group list: the server has asked for the client to provide credentials"</span>
E1230 11:20:13.804336    4991 memcache.go:265] <span class="hljs-string">"Unhandled Error"</span> err=<span class="hljs-string">"couldn't get current server API group list: the server has asked for the client to provide credentials"</span>
E1230 11:20:14.477604    4991 memcache.go:265] <span class="hljs-string">"Unhandled Error"</span> err=<span class="hljs-string">"couldn't get current server API group list: the server has asked for the client to provide credentials"</span>
error: You must be logged <span class="hljs-keyword">in</span> to the server (the server has asked <span class="hljs-keyword">for</span> the client to provide credentials)
</code></pre>
<p>The error message is not really meaningful or helpful, and it can be caused by various factors. While googling for a solution, I found <a target="_blank" href="https://stackoverflow.com/questions/75406313/couldnt-get-current-server-api-group-list-the-server-has-asked-for-the-client">this SO question</a>, but the accepted answer did not really cover my case (and it links to an AWS re:Post article that is hidden behind a premium support paywall).</p>
<p>As my setup was supposed to be really quick, I took the example from <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_cluster#eks-cluster-with-eks-auto-mode">Terraform eks_cluster resource docs</a>.</p>
<p>I dove deeper into the problem and finally found out what was going on - my user was missing permissions to the cluster!</p>
<p>So I added the following configuration:</p>
<pre><code class="lang-bash">data <span class="hljs-string">"aws_iam_user"</span> <span class="hljs-string">"jan_tyminski"</span> {
  user_name = <span class="hljs-string">"diving.devops"</span>
}

resource <span class="hljs-string">"aws_eks_access_entry"</span> <span class="hljs-string">"jan_tyminski"</span> {
  cluster_name  = aws_eks_cluster.my_cluster.name
  principal_arn = data.aws_iam_user.jan_tyminski.arn
  <span class="hljs-built_in">type</span>          = <span class="hljs-string">"STANDARD"</span>
}

resource <span class="hljs-string">"aws_eks_access_policy_association"</span> <span class="hljs-string">"jan_tyminski_AmazonEKSAdminPolicy"</span> {
  cluster_name  = aws_eks_cluster.my_cluster.name
  policy_arn    = <span class="hljs-string">"arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"</span>
  principal_arn = aws_eks_access_entry.jan_tyminski.principal_arn

  access_scope {
    <span class="hljs-built_in">type</span> = <span class="hljs-string">"cluster"</span>
  }
}

resource <span class="hljs-string">"aws_eks_access_policy_association"</span> <span class="hljs-string">"jan_tyminski_AmazonEKSClusterAdminPolicy"</span> {
  cluster_name  = aws_eks_cluster.my_cluster.name
  policy_arn    = <span class="hljs-string">"arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy"</span>
  principal_arn = aws_eks_access_entry.jan_tyminski.principal_arn

  access_scope {
    <span class="hljs-built_in">type</span> = <span class="hljs-string">"cluster"</span>
  }
}
</code></pre>
<p><code>diving.devops</code> is not a real user name - it is just <a target="_blank" href="https://www.instagram.com/diving.devops/">my Instagram profile</a> and <a target="_blank" href="https://www.youtube.com/@diving.devops">my YouTube channel</a>.<br />And this resolved my issue - I could finally start using <code>kubectl</code>, yay!</p>
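<p>If you prefer to fix this outside Terraform (for example, to unblock yourself quickly), the same access entry can be created with the AWS CLI - the cluster name and principal ARN below are placeholders:</p>
<pre><code class="lang-bash"># Create the access entry for the IAM principal
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:user/diving.devops

# Associate a cluster-wide access policy with that principal
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:user/diving.devops \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
</code></pre>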
<p>I have also answered the SO question with <a target="_blank" href="https://stackoverflow.com/a/79317511/1520842">my own solution</a>, based on <a target="_blank" href="https://stackoverflow.com/a/79267879/1520842">this answer</a> - I showed the Terraform way and added the <code>AmazonEKSAdminPolicy</code> and <code>AmazonEKSClusterAdminPolicy</code> there, with a note that this setup is for a PoC scenario - so that readers are aware of this issue.</p>
<p>Of course this is just one of the possible scenarios for this error - it was my scenario, and the fix may or may not work for you depending on the underlying issue you have. If that is not your case, <a target="_blank" href="https://stackoverflow.com/questions/75406313/couldnt-get-current-server-api-group-list-the-server-has-asked-for-the-client">I am linking again to the SO question</a> - check it for more solutions.</p>
]]></content:encoded></item><item><title><![CDATA[Is it possible to use a custom certificate issued by a private certificate authority with custom origin behind CloudFront?]]></title><description><![CDATA[One day an engineer in my team came up with this question.
There was no quick answer available, so I started digging and here's what I found.
CloudFront's docs on CNAMEs and HTTPS requirements say:

CloudFront supports all types of certificates issue...]]></description><link>https://tymik.me/is-it-possible-to-use-a-custom-certificate-issued-by-a-private-certificate-authority-with-custom-origin-behind-cloudfront</link><guid isPermaLink="true">https://tymik.me/is-it-possible-to-use-a-custom-certificate-issued-by-a-private-certificate-authority-with-custom-origin-behind-cloudfront</guid><category><![CDATA[AWS]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[Certificate Authority]]></category><category><![CDATA[Security]]></category><category><![CDATA[cache]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Wed, 02 Oct 2024 20:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716417023165/78c8a3c5-f544-4087-81d1-54bbdd5cf161.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One day an engineer in my team came up with this question.</p>
<p>There was no quick answer available, so I started digging and here's what I found.</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html#https-requirements-supported-types">CloudFront's docs on CNAMEs and HTTPS requirements</a> say:</p>
<blockquote>
<p>CloudFront supports all types of certificates issued by a trusted certificate authority.</p>
</blockquote>
<p>Meanwhile, the <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-custom-origin.html">CloudFront to Custom Origin setup</a> can only work with certificates from trusted third-party CAs:</p>
<blockquote>
<p>For origins other than Elastic Load Balancing load balancers, you must use a certificate that is signed by a trusted third-party certificate authority (CA), for example, Comodo, DigiCert, or Symantec.</p>
</blockquote>
<p>And in more detail, the <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-custom-origin.html#using-https-cloudfront-to-origin-certificate">CloudFront docs</a> explain that CloudFront supports the same CAs that <a target="_blank" href="https://ccadb.my.salesforce-sites.com/mozilla/CACertificatesInFirefoxReport">Mozilla supports</a>:</p>
<blockquote>
<p>When CloudFront uses HTTPS to communicate with your origin, CloudFront verifies that the certificate was issued by a trusted certificate authority. CloudFront supports the same certificate authorities that Mozilla does. For the current list, see Mozilla Included CA Certificate List. You can’t use a self-signed certificate for HTTPS communication between CloudFront and your origin.</p>
</blockquote>
<p>AWS also emphasizes in the docs that:</p>
<blockquote>
<p>Important<br />If the origin server returns an expired certificate, an invalid certificate, or a self-signed certificate, or if the origin server returns the certificate chain in the wrong order, CloudFront drops the TCP connection, returns HTTP status code 502 (Bad Gateway) to the viewer, and sets the X-Cache header to Error from cloudfront. Also, if the full chain of certificates, including the intermediate certificate, is not present, CloudFront drops the TCP connection.</p>
</blockquote>
<p><strong>Summarising the above - no, it is not possible to set up CloudFront with a Custom Origin using a certificate issued by a private Certificate Authority or a self-signed certificate.</strong></p>
<h3 id="heading-but-why-do-i-need-to-know-that">But why do I need to know that?</h3>
<p>Because large enterprise companies may have internal CAs for internal purposes, and certificates from those CAs can be trusted on company devices - so you might be instructed to use a certificate issued by such a CA by someone who is not aware of the above.</p>
<p>This short article can help you easily provide the required documentation for others in case you are asked whether that’s possible, or are requested to implement your own certificate in such a scenario.</p>
]]></content:encoded></item><item><title><![CDATA[InvalidParameterValue: DBName must begin with a letter and contain only alphanumeric characters]]></title><description><![CDATA[❗
TL;DR: Don't use hyphen (-) for DBName.


And if you want to understand the reasons behind that, keep reading further.
You are creating an RDS instance with Terraform.
You have coded the instance, executed terraform plan and all looks great!
That's...]]></description><link>https://tymik.me/terraform-invalidparametervalue-dbname</link><guid isPermaLink="true">https://tymik.me/terraform-invalidparametervalue-dbname</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[SQL]]></category><category><![CDATA[error]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Thu, 23 May 2024 13:00:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716416905203/30f22f5a-707d-493f-a309-f585d451a74d.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div data-node-type="callout">
<div data-node-type="callout-emoji">❗</div>
<div data-node-type="callout-text">TL;DR: Don't use hyphen (<code>-</code>) for DBName.</div>
</div>

<p>And if you want to understand the reasons behind that, keep reading further.</p>
<p>You are creating an RDS instance with Terraform.</p>
<p>You have coded the instance, executed <code>terraform plan</code> and all looks great!</p>
<p>That's great!<br />Now you can apply the changes!</p>
<p>So you run <code>terraform apply</code>, Terraform starts calling AWS API and suddenly you get an error:</p>
<pre><code class="lang-bash">InvalidParameterValue: DBName must begin with a letter and contain only alphanumeric characters
</code></pre>
<p>But why?<br />You wonder!</p>
<p>And I also wonder!</p>
<p>I have seen this a couple of times already...<br />But I always forget why it happens...<br />Mostly I solve this issue once, when setting up the naming convention in an organization, and then I forget about it...</p>
<p>Recently I had forgotten about it again, but a teammate asked me to help them with the issue.</p>
<p>After a couple of minutes, I finally got it!</p>
<p>We were deploying Postgres!</p>
<p>The problem here is more complex than just AWS or Terraform constraints.<br />Terraform manages AWS resources, and AWS provides many different services - some of them are well-known open-source technologies that come with their own limitations.</p>
<p>Postgres doesn't allow hyphens (<code>-</code>) in the database name!<br />Only underscores (<code>_</code>)!</p>
<p>By the <a target="_blank" href="https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS">Postgres docs</a>:</p>
<blockquote>
<p>SQL identifiers and key words must begin with a letter (<code>a</code>-<code>z</code>, but also letters with diacritical marks and non-Latin letters) or an underscore (<code>_</code>). Subsequent characters in an identifier or key word can be letters, underscores, digits (<code>0</code>-<code>9</code>), or dollar signs (<code>$</code>). Note that dollar signs are not allowed in identifiers according to the letter of the SQL standard, so their use might render applications less portable.</p>
</blockquote>
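<p>If you want to catch this at <code>terraform plan</code> time instead of during the apply, you can guard the name with a variable validation - a minimal sketch, with an illustrative variable name:</p>
<pre><code class="lang-bash">variable "db_name" {
  type        = string
  description = "Initial database name for the RDS instance"

  validation {
    # Mirrors the AWS error message: must begin with a letter and
    # contain only alphanumeric characters - in particular, no hyphens.
    condition     = can(regex("^[A-Za-z][A-Za-z0-9]*$", var.db_name))
    error_message = "DBName must begin with a letter and contain only alphanumeric characters."
  }
}
</code></pre>
<p>With this in place, <code>terraform plan</code> fails fast with your own message instead of the API error surfacing mid-apply.</p>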
<p>But what if you didn't set up Postgres?</p>
<p>Apparently <a target="_blank" href="https://dev.mysql.com/doc/refman/8.4/en/identifiers.html">MySQL doesn't support hyphens</a> in unquoted identifiers either:</p>
<blockquote>
<ul>
<li><p>Permitted characters in unquoted identifiers:</p>
<ul>
<li><p>ASCII: [0-9,a-z,A-Z$_] (basic Latin letters, digits 0-9, dollar, underscore)</p>
</li>
<li><p>Extended: U+0080 .. U+FFFF</p>
</li>
</ul>
</li>
</ul>
</blockquote>
<p>And <a target="_blank" href="https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-identifiers?view=sql-server-ver16#rules-for-regular-identifiers">MS SQL Server</a> too:</p>
<blockquote>
<p>The first character must be one of the following:</p>
<ul>
<li><p>A letter as defined by the Unicode Standard 3.2. The Unicode definition of letters includes Latin characters from a through z, from A through Z, and also letter characters from other languages.</p>
</li>
<li><p>The underscore (_), at sign (@), or number sign (#).</p>
</li>
</ul>
</blockquote>
<p>I'm neither a database nor an SQL expert, but it looks like hyphens are not allowed in database names in the most popular SQL engines.</p>
<p>I cannot confirm that for sure, as I cannot find any official SQL language reference manual on the Internet, but it seems to be a limitation the SQL standard places on identifiers - and a database name is an identifier.<br />Vendors of SQL engines follow the SQL standard more or less strictly, so there may be engines that accept hyphens (<code>-</code>) in database names, but that would be an exception to everything I have seen in the past and during the research for this post.</p>
<p>If, by any chance, you know more on that matter, feel free to share your thoughts in a comment!<br />I'm always happy to hear from you and learn from you!</p>
]]></content:encoded></item><item><title><![CDATA[Global Accelerator behind Cloudfront]]></title><description><![CDATA[Intro
I made a PoC that used Global Accelerator behind Cloudfront.
I haven't found any article regarding such solution being possible, AWS docs also didn't clearly state such scenario is possible.AWS Console didn't help - I couldn't select Global Acc...]]></description><link>https://tymik.me/global-accelerator-behind-cloudfront</link><guid isPermaLink="true">https://tymik.me/global-accelerator-behind-cloudfront</guid><category><![CDATA[AWS]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[globalaccelerator]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Wed, 07 Feb 2024 09:06:20 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-intro">Intro</h2>
<p>I made a PoC that used Global Accelerator behind Cloudfront.</p>
<p>I haven't found any article describing such a solution, and the AWS docs don't clearly state that this scenario is possible.<br />The AWS Console didn't help either - I couldn't select the Global Accelerator as the origin from the dropdown menu, and entering its domain didn't trigger any suggestions, so I felt that adding a custom origin was not possible - but it is!</p>
<p>I needed a way to serve 2 separate environments in different VPCs for a customer who wanted to bring up a brand new infrastructure based on EKS, while keeping both the old and the new environment usable during the testing phase - to ensure that the EKS environment behaves identically to the old application setup.<br />An Application Load Balancer couldn't serve this purpose across different VPCs.</p>
<p>Cloudfront doesn't let you configure different origins with the same weight in a single Cloudfront Distribution - you can only configure a failover origin - which didn't fit my case.</p>
<p><a target="_blank" href="https://aws.amazon.com/global-accelerator/">Global Accelerator</a> can route the traffic to various endpoints like <a target="_blank" href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html">Application Load Balancers</a>, <a target="_blank" href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html">Network Load Balancers</a> or <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Instances.html">EC2 instances</a> directly.</p>
<p>I had everything else configured, so I only needed to add a Global Accelerator between the Cloudfront Distribution and the ALBs, and configure an ACM Certificate for the testing domain used in this scenario.</p>
<h2 id="heading-how-to-set-this-all-up">How to set this all up</h2>
<p>We need the following resources:</p>
<ul>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html">Global Accelerator</a> (<a target="_blank" href="https://aws.amazon.com/global-accelerator/pricing/">pricing</a>)</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html">ACM</a> Certificate for Global Accelerator</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cloudfront/">Cloudfront</a> (<a target="_blank" href="https://aws.amazon.com/cloudfront/pricing/">pricing</a>)</p>
</li>
<li><p>ACM Certificate for Cloudfront</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html">Route53</a> records to route the traffic to Cloudfront</p>
</li>
<li><p>Route53 records to route the traffic to Global Accelerator</p>
</li>
<li><p>Ingresses for target environments (Application Load Balancers of both environments in my scenario)</p>
</li>
</ul>
<p>My traffic flow for this case looks like in the following diagram:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1698154302311/616810ab-1026-4cc6-8441-e568fb87f6e3.png" alt class="image--center mx-auto" /></p>
<p><em>I purposely didn't use AWS icons for the resources as I want you to focus on just the solution without being distracted by the icons - not everyone is fluent with them.<br />I might add another diagram using the icons in the future.</em></p>
<p>We need to do the following:</p>
<ol>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-get-started.html">Set up Global Accelerator</a> - <a target="_blank" href="https://docs.aws.amazon.com/global-accelerator/latest/dg/introduction-accelerator-types.html">Standard accelerator</a> will work here</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html">Configure the ACM Certificates</a> for the Global Accelerator - they need to be in the same region as the Global Accelerator</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-creating-console.html">Set up Cloudfront distribution</a> (if you don't have one) and point it to Global Accelerator</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html">Configure the ACM Certificates</a> for this Cloudfront distribution (they need to be in <code>us-east-1</code> as Cloudfront distributions are located there)</p>
</li>
<li><p>Add our endpoints to the Global Accelerator and configure weights for them - mine were equal, as I wanted an even amount of traffic to reach both environments - your scenario can differ, and you can change the weights over time, depending on how confident you are in the new infrastructure.</p>
</li>
<li><p>Point your domain to Cloudfront with Route53 (or any other DNS actually, but let's keep everything in AWS), if such a record doesn't exist yet</p>
</li>
<li><p>Test if everything works as expected</p>
</li>
</ol>
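<p>For reference, the distinctive part of the setup - the accelerator splitting traffic between two ALBs with equal weights - can be sketched in Terraform like this (the resource names are illustrative, and <code>aws_lb.old_env</code> / <code>aws_lb.new_env</code> are assumed to exist elsewhere in the code):</p>
<pre><code class="lang-bash">resource "aws_globalaccelerator_accelerator" "example" {
  name    = "blue-green-poc"
  enabled = true
}

resource "aws_globalaccelerator_listener" "https" {
  accelerator_arn = aws_globalaccelerator_accelerator.example.id
  protocol        = "TCP"

  port_range {
    from_port = 443
    to_port   = 443
  }
}

resource "aws_globalaccelerator_endpoint_group" "envs" {
  listener_arn = aws_globalaccelerator_listener.https.id

  # Equal weights - traffic is split evenly between both environments.
  endpoint_configuration {
    endpoint_id = aws_lb.old_env.arn
    weight      = 100
  }

  endpoint_configuration {
    endpoint_id = aws_lb.new_env.arn
    weight      = 100
  }
}
</code></pre>
<p>The accelerator's <code>dns_name</code> attribute is then what you enter as the custom origin domain of the Cloudfront Distribution.</p>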
<p>This guide assumes you already have some AWS experience, so not every step is spelled out in detail.</p>
<p>Did you know such a scenario is possible?<br />Do you have any other interesting use cases for Global Accelerator?<br />I will be happy to read your comments!</p>
]]></content:encoded></item><item><title><![CDATA[AWS Security Groups are tricky!]]></title><description><![CDATA[Recently I was configuring a security group for a service that uses TCP and UDP ports for ingress connectivity.
Security Groups are an Amazon Web Services feature for Amazon VPC (virtual private cloud) to set up a stateful firewall.
I used Terraform ...]]></description><link>https://tymik.me/aws-security-groups-are-tricky</link><guid isPermaLink="true">https://tymik.me/aws-security-groups-are-tricky</guid><category><![CDATA[AWS]]></category><category><![CDATA[vpc]]></category><category><![CDATA[securitygroups]]></category><category><![CDATA[Security]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Tue, 02 Jan 2024 12:30:03 GMT</pubDate><content:encoded><![CDATA[<p>Recently I was configuring a security group for a service that uses TCP and UDP ports for ingress connectivity.</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html">Security Groups</a> are an Amazon Web Services feature for <a target="_blank" href="https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html">Amazon VPC (virtual private cloud)</a> to set up a stateful firewall.</p>
<p>I used Terraform (Terragrunt actually, but that's a detail) to configure the security group's ingress rules.</p>
<p>I wanted to be a smart guy, so I set the protocol to <code>-1</code>, which means "all protocols" - that way I didn't need to specify separate rules for TCP and UDP for the same port.</p>
<p>Here comes my surprise, according to the <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html">AWS documentation</a>:</p>
<blockquote>
<p>Use <code>-1</code> to specify all protocols. If you specify <code>-1</code> or a protocol other than <code>tcp</code>, <code>udp</code>, or <code>icmp</code>, traffic on all ports is allowed, regardless of any ports you specify.</p>
</blockquote>
<p>I was not aware of that and you might not be aware too!</p>
<p>Honestly, I would expect the AWS API to return an error, or at least a warning, when trying to set a configuration that cannot take effect - it either didn't happen or was silenced in the Terraform provider for AWS (I didn't verify, but I suspect the AWS API doesn't give any feedback about that).</p>
<p>It was caught by my workmate who eventually made separate entries for TCP and UDP to get the configuration right - shortcuts, although tempting, are not always good solutions.</p>
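<p>For the record, the working configuration with separate TCP and UDP entries can be sketched like this (the port and CIDR are illustrative, and <code>aws_security_group.service</code> is assumed to exist elsewhere in the code):</p>
<pre><code class="lang-bash"># One rule per protocol - setting ip_protocol = "-1" would open ALL
# ports, regardless of the from_port/to_port values.
resource "aws_vpc_security_group_ingress_rule" "service_tcp" {
  security_group_id = aws_security_group.service.id
  cidr_ipv4         = "10.0.0.0/16"
  ip_protocol       = "tcp"
  from_port         = 3478
  to_port           = 3478
}

resource "aws_vpc_security_group_ingress_rule" "service_udp" {
  security_group_id = aws_security_group.service.id
  cidr_ipv4         = "10.0.0.0/16"
  ip_protocol       = "udp"
  from_port         = 3478
  to_port           = 3478
}
</code></pre>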
]]></content:encoded></item><item><title><![CDATA[Terraform - MalformedPolicyDocument: The policy failed legacy parsing]]></title><description><![CDATA[I was affected by a bug in the AWS provider for Terraform recently.
I've got the following error when applying Terraform changes:
Error: creating IAM Policy (MyPolicyName): MalformedPolicyDocument: The policy failed legacy parsing

According to the A...]]></description><link>https://tymik.me/terraform-malformedpolicydocument-the-policy-failed-legacy-parsing</link><guid isPermaLink="true">https://tymik.me/terraform-malformedpolicydocument-the-policy-failed-legacy-parsing</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[IAM]]></category><category><![CDATA[Policy]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Sun, 10 Dec 2023 19:48:37 GMT</pubDate><content:encoded><![CDATA[<p>I was affected by a <a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/issues/22879">bug</a> in the AWS provider for Terraform recently.</p>
<p>I've got the following error when applying Terraform changes:</p>
<pre><code class="lang-bash">Error: creating IAM Policy (MyPolicyName): MalformedPolicyDocument: The policy failed legacy parsing
</code></pre>
<p><a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_permissions-to-switch.html#:~:text=to%20the%20user-,Creating%20or%20editing%20the%20policy,-A%20policy%20that">According to the AWS docs</a> the policy for <code>sts:AssumeRole</code> should look like the following:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-attr">"Statement"</span>: {
    <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
    <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>,
    <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:iam::account-id:role/Test*"</span>
  }
}
</code></pre>
<p>It works that way in AWS Console and with AWS CLI (I have personally tried that in AWS Console and <a target="_blank" href="https://stackoverflow.com/questions/75312843/terraform-iam-policy-creation-malformedpolicydocument-the-policy-failed-legac">people on the Internet reported that it works in AWS CLI</a>).</p>
<p>But not with Terraform!</p>
<p><a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/issues/22879">The bug in the provider</a> requires an array to be used for the <code>Statement</code>, like this:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
  <span class="hljs-attr">"Statement"</span>: [{
    <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
    <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRole"</span>,
    <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:iam::account-id:role/Test*"</span>
  }]
}
</code></pre>
<p>This is required even when the policy contains only a single statement.</p>
<p>When I tried to google the error without the <code>Terraform</code> keyword, the results suggested that I might be missing the <code>Version</code> field (which I had copied from the AWS docs) or a <code>!Sub</code> directive in my CloudFormation Stack (which I didn't have at all).</p>
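<p>One way to sidestep the problem entirely is to let Terraform render the JSON via the <code>aws_iam_policy_document</code> data source, which always emits <code>Statement</code> as an array - a sketch with illustrative names:</p>
<pre><code class="lang-bash">data "aws_iam_policy_document" "assume_test_roles" {
  statement {
    effect    = "Allow"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::account-id:role/Test*"]
  }
}

resource "aws_iam_policy" "assume_test_roles" {
  name   = "MyPolicyName"
  policy = data.aws_iam_policy_document.assume_test_roles.json
}
</code></pre>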
<p>I hope you find that useful and if you are affected too, feel free to <a target="_blank" href="https://github.com/hashicorp/terraform-provider-aws/issues/22879">upvote the issue</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Terraform archived template provider and what now?]]></title><description><![CDATA[What has happened actually?
Terraform used to have a template provider in past. It was deprecated with Terraform 0.12 and archived on the 9th of October 2020.Although it was more than 3 years ago, some of you might still be using it!
It provided the ...]]></description><link>https://tymik.me/terraform-archived-template-provider-and-what-now</link><guid isPermaLink="true">https://tymik.me/terraform-archived-template-provider-and-what-now</guid><category><![CDATA[Terraform]]></category><category><![CDATA[cloud-init]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Sun, 19 Nov 2023 20:46:30 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-what-has-happened-actually">What has happened actually?</h2>
<p>Terraform used to have a <code>template</code> <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/template/latest/docs">provider</a> in the past. It was deprecated with Terraform 0.12 and <a target="_blank" href="https://github.com/hashicorp/terraform-provider-template">archived on the 9th of October 2020</a>.<br />Although that was more than 3 years ago, some of you might still be using it!</p>
<p>It provided the following data sources:<br /><code>template_file</code><br /><code>template_cloudinit_config</code><br />And they were used a lot in the past.</p>
<p><code>template_file</code> was used for templating files, e.g. injecting variables passed from the Terraform code into those files.</p>
<p><code>template_cloudinit_config</code> was used to generate the Cloud-init configuration consumable by other resources, e.g. <code>aws_instance</code> as <code>user_data</code> / <code>user_data_base64</code> or other resources that can be configured with Cloud-init.</p>
<p>There's quite a chance you have used it in the past or will run into it in the future, e.g. when following some tutorials available on the Internet.</p>
<p>So did I, as I was unaware of the issue - and I would probably still be, but a workmate discovered that the binary for the <code>template</code> provider is not available for the Mac M1 and newer ARM-based Macs.<br />There's a chance that you have also followed some old tutorial with the old code and are facing the same issue on your shiny new computer!<br />Or you are fine, but your teammate has the issue!</p>
<p>But there's a solution - with newer Terraform we can use the <code>cloudinit</code> <a target="_blank" href="https://registry.terraform.io/providers/hashicorp/cloudinit/latest/docs">provider</a>, which comes with the <code>cloudinit_config</code> data source that can use the <code>templatefile()</code> function inside.<br />I have prepared an example of how to migrate from the deprecated code to the modern equivalent, as I haven't found a clear, simple example of how to do that.</p>
<h2 id="heading-the-code">The code</h2>
<p>This is how the code looked with the deprecated <code>template</code> provider:</p>
<pre><code class="lang-bash">data <span class="hljs-string">"template_file"</span> <span class="hljs-string">"example"</span> {
  template = file(<span class="hljs-string">"<span class="hljs-variable">${path.module}</span>/templates/example.yml"</span>)

  vars = {
    EXAMPLE_VAR = var.example_var
  }
}

data <span class="hljs-string">"template_cloudinit_config"</span> <span class="hljs-string">"config"</span> {
  gzip          = <span class="hljs-literal">true</span>
  base64_encode = <span class="hljs-literal">true</span>

  part {
    filename     = <span class="hljs-string">"example.yml"</span>
    content_type = <span class="hljs-string">"text/cloud-config"</span>
    content      = data.template_file.example.rendered
  }
}
</code></pre>
<p>Here's how the code looks after refactoring to the <code>cloudinit</code> provider:</p>
<pre><code class="lang-bash">data <span class="hljs-string">"cloudinit_config"</span> <span class="hljs-string">"config"</span> {
  gzip          = <span class="hljs-literal">true</span>
  base64_encode = <span class="hljs-literal">true</span>

  part {
    filename     = <span class="hljs-string">"example.yml"</span>
    content_type = <span class="hljs-string">"text/cloud-config"</span>
    content = templatefile(<span class="hljs-string">"<span class="hljs-variable">${path.module}</span>/templates/example.yml"</span>, {
      EXAMPLE_VAR = var.example_var
    })
  }
}
</code></pre>
<p>I have tested the refactored code, so you don't have to figure it out on your own!</p>
<p>And yes, I also needed the refactored code for the infrastructure I manage.</p>
]]></content:encoded></item><item><title><![CDATA[RDS Auto Minor Version Upgrade does not work as you could probably expect!]]></title><description><![CDATA[I was quite surprised when I found out!
The RDS Auto Minor Version Upgrade doesn't always automatically update the minor version!
Especially since the documentation is misleading with this, saying at the beginning:

A minor engine version is an updat...]]></description><link>https://tymik.me/rds-auto-minor-version-upgrade-does-not-work-as-you-could-probably-expect</link><guid isPermaLink="true">https://tymik.me/rds-auto-minor-version-upgrade-does-not-work-as-you-could-probably-expect</guid><category><![CDATA[AWS]]></category><category><![CDATA[AWS RDS]]></category><category><![CDATA[upgrade]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Thu, 20 Jul 2023 07:05:22 GMT</pubDate><content:encoded><![CDATA[<p>I was quite surprised when I found out!</p>
<p>The RDS Auto Minor Version Upgrade doesn't always automatically update the minor version!</p>
<p>Especially since the <a target="_blank" href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Upgrading.html#USER_UpgradeDBInstance.Upgrading.AutoMinorVersionUpgrades">documentation is misleading</a> with this, saying at the beginning:</p>
<blockquote>
<p>A <em>minor engine version</em> is an update to a DB engine version within a major engine version. For example, a major engine version might be 9.6 with the minor engine versions 9.6.11 and 9.6.12 within it.</p>
<p>If you want Amazon RDS to upgrade the DB engine version of a database automatically, you can enable auto minor version upgrades for the database.</p>
</blockquote>
<p>And in <a target="_blank" href="https://aws.amazon.com/about-aws/whats-new/2018/12/amazon-rds-enhances-auto-minor-version-upgrades/">their blog post</a> regarding this feature:</p>
<blockquote>
<p>Auto Minor Version Upgrade is a feature that you can enable to have your database automatically upgraded when a new minor database engine version is available.</p>
</blockquote>
<p>And I haven't found a single hint that the upgrades need to meet any additional conditions to be performed...</p>
<p>So when does the RDS Auto Minor Version Upgrade perform the upgrade?</p>
<p>There are two cases:</p>
<ol>
<li><p>The minor version that you currently have is completely deprecated by AWS</p>
</li>
<li><p>The minor version has the <code>AutoUpgrade: True</code> attribute set by AWS</p>
</li>
</ol>
<p>The <code>AutoUpgrade: True</code> attribute is set by AWS under special circumstances, when the new version contains very important cumulative bug fixes and an upgrade is absolutely necessary.</p>
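<p>For completeness, enabling the feature in Terraform is a single attribute - just remember that setting it does not guarantee that every new minor version gets applied (a minimal sketch, with illustrative names and versions):</p>
<pre><code class="lang-bash">resource "aws_db_instance" "example" {
  identifier          = "example"
  engine              = "postgres"
  engine_version      = "15.4"
  instance_class      = "db.t4g.micro"
  allocated_storage   = 20
  username            = "exampleuser"
  password            = "change-me" # use a secrets store in real code
  skip_final_snapshot = true

  # Upgrades run in the maintenance window, and only when AWS marks a
  # newer minor with AutoUpgrade: True or deprecates the current one.
  auto_minor_version_upgrade = true
  maintenance_window         = "sun:03:00-sun:04:00"
}
</code></pre>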
<p><strong>How can you check which minor was tagged with</strong> <code>AutoUpgrade: True</code><strong>?</strong><br />By executing this command:</p>
<pre><code class="lang-bash">aws rds describe-db-engine-versions --region YOURREGION --output=table --engine YOURENGINE
</code></pre>
<p>You will see that most of the minor versions have the <code>AutoUpgrade: False</code> attribute set. Your current version should be either the newest with <code>AutoUpgrade: True</code> or the one you have manually chosen - depending on which is newer.</p>
<p><strong>Could this be automated?</strong><br />Probably yes - with a Lambda function that finds the most recent minor version and calls the RDS API to upgrade to it in the next maintenance window.</p>
<p><strong>Is it worth automating?</strong><br />In most cases - no.<br />The most important bug fixes and security patches will be applied automatically.</p>
<p>Unless you are sure you need this, or you already have a stable mechanism to reuse, you will most likely spend more time automating it than you will gain from running the most recent minor.<br />Remember that there's always an opportunity cost to doing something - not doing something else.</p>
<p>I haven't automated this, as in my case the implementation costs outweigh the potential benefits.</p>
<p><strong>Will there be more storytelling?</strong><br />Maybe.<br />I think it doesn't fit into every blog post (like this one), but it may be continued for some in the future - stay tuned!</p>
<p>Sources:</p>
<p><a target="_blank" href="https://repost.aws/questions/QUqLHWCC0mRDiRLutC6pMHbA/rds-instances-do-not-auto-upgrade-minor-versions">https://repost.aws/questions/QUqLHWCC0mRDiRLutC6pMHbA/rds-instances-do-not-auto-upgrade-minor-versions</a></p>
<p><a target="_blank" href="https://repost.aws/questions/QUT0JuX6IhSAyXaSdQK5SW3A">https://repost.aws/questions/QUT0JuX6IhSAyXaSdQK5SW3A</a></p>
]]></content:encoded></item><item><title><![CDATA[Migrate from CloudFormation to Terraform]]></title><description><![CDATA[The Tale of the CloudFormation
A long time ago, in a land far, far away, there was a DevOps dojo.The dojo trained young padawans in DevOps craftsmanship.Many of them were ambitious and wanted to use their skills in practice as soon as possible.They w...]]></description><link>https://tymik.me/migrate-from-cloudformation-to-terraform</link><guid isPermaLink="true">https://tymik.me/migrate-from-cloudformation-to-terraform</guid><category><![CDATA[Terraform]]></category><category><![CDATA[AWS]]></category><category><![CDATA[cloudformation]]></category><category><![CDATA[Devops]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Jan Tymiński]]></dc:creator><pubDate>Tue, 13 Jun 2023 22:17:03 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-the-tale-of-the-cloudformation">The Tale of the CloudFormation</h2>
<p>A long time ago, in a land far, far away, there was a DevOps dojo.<br />The dojo trained young padawans in DevOps craftsmanship.<br />Many of them were ambitious and wanted to use their skills in practice as soon as possible.<br />They were given quests.</p>
<p>Quests varied in their toughness, but many of them had a common ground - the fate of the young padawans.<br />They were sent to accomplish missions and fight evil creatures in the lands of the AWS CloudFormation.</p>
<p>AWS CloudFormation is one of the toughest lands in the Clouds universe.<br />Many adventurers have failed to accomplish quests in these lands.<br />Many have lost their lives fighting the creatures called Stacks.<br />Many have lost their minds to the riddles of Change Sets.<br />New DevOpses were coming, they were moving further, but with all the burden they couldn't conquer the lands of the Cloudformation.</p>
<p>One day the dojo sent two of their most experienced DevOpses.<br />They have seen the scenes nobody wants to see.<br />They have seen the defeat of their mentees.<br />They have seen a disaster.</p>
<p>With a plan, they started picking up battles, tackling them with a balance between efforts and benefits.<br />It took months, but the strategy paid off.<br />The Team of Two DevOpses visited hundreds of villages of the CloudFormation lands.<br />Although they didn't conquer the kingdom, they have secured peace in the Clouds universe.</p>
<p>One day a team of three DevOps Ninjas arrived at the CloudFormation lands.<br />They came from the kingdom of the Terraform.<br />They were not pleased with what they found.</p>
<p>The DevOps Ninjas have tightly cooperated with the Dojo Masters.<br />They needed to understand all the situation and politics of the CloudFormation lands.<br />Months passed by...<br />The Dojo Masters left the CloudFormation lands, leaving them in the hands of the brave DevOps Ninjas to continue on the journey.<br />And months passed by...</p>
<p>The DevOps Ninjas were already sure that only with the powers of the Terraform Kingdom they can bring everlasting peace to these lands.<br />The authority of the CloudFormation Kingdom must come to an end.</p>
<p>And so the biggest war in the universe began!</p>
<h2 id="heading-migrating-the-infrastructure">Migrating the infrastructure</h2>
<p>I hope you enjoyed this little introduction, now let's get to the technical part.</p>
<p>The process is pretty simple. I made a PoC with a CloudFormation template <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-applications-eu-west-1.html">provided by AWS</a>.<br />I chose the <mark>WordPress scalable and durable</mark> template to verify the behaviour around the RDS database, which was crucial for me.<br />I recommend you try it on your own to see what will happen and to test the scenario you have - you might want to look for another template that better simulates your case.</p>
<p>The process combines two separate procedures described by AWS, and I will list the steps with links to AWS articles so you can easily verify them:</p>
<ol>
<li><p>Set up <code>DeletionPolicy: Retain</code> to all resources you want to retain when CloudFormation Stack is removed (<a target="_blank" href="https://repost.aws/knowledge-center/delete-cf-stack-retain-resources">link</a>)</p>
</li>
<li><p>Upload the updated CloudFormation Stack definition</p>
</li>
<li><p>Import these resources to your Terraform code</p>
</li>
<li><p>Delete the CloudFormation Stack (<a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-delete-stack.html">link</a>)</p>
</li>
</ol>
<p>And that's it!</p>
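<p>As a side note, step 3 used to mean running <code>terraform import</code> for every resource; since Terraform 1.5 you can also declare the imports in code - a sketch with an illustrative address and identifier:</p>
<pre><code class="lang-bash"># Terraform 1.5+ import block - run "terraform plan" afterwards to verify
# that the imported resource matches the written configuration.
import {
  to = aws_db_instance.wordpress
  id = "wordpress-db-instance-identifier"
}

resource "aws_db_instance" "wordpress" {
  # ... attributes matching the retained RDS instance go here ...
}
</code></pre>
<p>On older Terraform versions the equivalent is <code>terraform import aws_db_instance.wordpress wordpress-db-instance-identifier</code>.</p>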
<p>Deleting the CloudFormation Stack might fail if it contains resources that are missing <code>DeletionPolicy: Retain</code> but are connected to retained resources - e.g. a subnet without this attribute that is used by a retained RDS instance. AWS will try to remove the subnet and fail, because it cannot be removed as long as the RDS instance uses it. This situation actually works in our favour - we will not accidentally remove resources that other retained resources need to work.<br />The CloudFormation Stack can then be forcibly removed, ignoring the resources it failed to delete.</p>
<p>I hope you find this post useful and that you enjoyed the little story at the beginning - the fairy tale is based on a real-life situation.</p>
]]></content:encoded></item></channel></rss>