Brandon Checketts

Web Programming, Linux System Administration, and Entrepreneurship in Athens, Georgia

AWS CodeDeploy Troubleshooting

CodeDeploy with Auto Scaling Groups is a bit of a complex mess to get working correctly, especially with an app that has been working and now needs to be updated for more modern functionality.

Start by updating the startup scripts with the latest versions from https://github.com/aws-samples/aws-codedeploy-samples/tree/master/load-balancing/elb-v2

I found that even the latest scripts there still weren't working. My instances were starting up, then dying shortly afterward. CodeDeploy was failing with this error:


LifecycleEvent - ApplicationStart
Script - /deploy/scripts/4_application_start.sh
Script - /deploy/scripts/register_with_elb.sh
[stderr]Running AWS CLI with region:
[stderr][FATAL] Unable to get this instance's ID; cannot continue.

Upon troubleshooting, I found that common_functions.sh has a get_instance_id() function that runs this curl command to get the instance ID:


curl -s http://169.254.169.254/latest/meta-data/instance-id

Running that command by itself while an instance was still running returned nothing, which is why it was failing.

It turns out that newer instances use IMDSv2 by default, and it is required (no longer optional). With that configuration, this curl command will fail. To fix this, I replaced the get_instance_id() function with this version:

# Usage: get_instance_id
#
#   Writes to STDOUT the EC2 instance ID for the local instance. Returns non-zero if the local
#   instance metadata URL is inaccessible.

get_instance_id() {
    TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" -s -f)
    if [ $? -ne 0 ] || [ -z "$TOKEN" ]; then
        echo "[FATAL] Failed to obtain IMDSv2 token; cannot continue." >&2
        return 1
    fi

    INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id -s -f)
    if [ $? -ne 0 ] || [ -z "$INSTANCE_ID" ]; then
        echo "[FATAL] Unable to get this instance's ID; cannot continue." >&2
        return 1
    fi

    echo "$INSTANCE_ID"
    return 0
}

This version uses the IMDSv2 API to get a token, then uses that token to get the instance ID.

With that code replaced, the application successfully registered with the Target Group, and the Auto Scaling group works correctly.

Alternatively (and for troubleshooting), I was able to make IMDSv2 optional using the AWS Console, or via CloudFormation with this part of the Launch Template:

Resources:
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateName: my-launch-template
      LaunchTemplateData:
        ImageId: ami-1234567890abcdef0
        InstanceType: t4g.micro
        MetadataOptions:
          HttpTokens: optional
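For an instance that is already running, the same setting can be flipped with the AWS CLI. This is just a sketch; the instance ID is a placeholder:

```
# Allow IMDSv1-style (token-less) metadata requests on a running instance
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens optional \
    --http-endpoint enabled
```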

ShipStation’s Auctane Fulfillment Network for 3PLs

At Data Automation, we recently built some functionality for use with ShipStation’s “Auctane Fulfillment Network”, as it is called in the API.
It looks like they refer to it as “Send Orders To Fulfillment” in their documentation.

This is a fairly clever innovation where if the Seller has ShipStation, and their Third Party Logistics (3PL) provider also uses ShipStation, they can essentially “Send To Fulfillment”, meaning it makes a copy of the order in the 3PL’s ShipStation account for them to fulfill the order. Once the 3PL fulfills the order, it copies the shipment information, including Carrier, Service, Tracking Number, and Estimated Delivery Date, back to the seller’s ShipStation account.

It looks like it is still a little convoluted to set up. The 3PL and Seller need to coordinate some things back and forth via email to begin. But once set up, the seller can simply click the “Send to Fulfillment” button inside their ShipStation account to assign the order to their 3PL. You can also set up automation rules to make that happen automatically depending on the sales channel, SKU, and other criteria.

From a technical perspective, the order is duplicated into the 3PL’s system, but not quite exactly the same as if it was pulled from the channel directly.

It’s always nice when working with a pleasant customer to troubleshoot new things. With their help, we got this sorted out, and it is now running smoothly for our Amazon Custom integration with ShipStation at Data Automation.

Thinking Outside the Box – Helping with a Tree Service Business

A good friend of mine owns Sherwood Forest Tree Service, which mostly cuts down and prunes trees in Northeast Georgia. That’s outside what I normally work with, but I’ve enjoyed learning about and helping with his business. I’m seeing a lot of opportunities to use technology in different aspects of his business.

I’ve looked at a list of all of his past customers, and am looking through property data to try and identify common things that will help define his ideal customer. Then we might be able to target more customers like those that he’s already worked with.

Also, I’m wondering about the ability to generate an estimate for tree removal from just a photo, having AI provide information like the species of tree, estimated height, and trunk diameter.

I’m excited to see if it turns into some other projects.

SSH Key Best Practices for 2025 – Using ed25519, key rotation, and other best practices

Apparently Google thinks I’m an expert at SSH keys, so I’m providing an update to my post from two years ago with some slight refinements.

You can tell quite a bit about other IT professionals from their public SSH key! I often work with others and ask for their key when granting access to a machine I control. It’s a negative sign when they ask how to create one. If they provide one in the PuTTYgen format, I know they’ve been asked for their key exactly once. A 2048-bit or smaller RSA key means they haven’t generated one in a long time. If they send me an ed25519 key with a comment other than their machine name, I feel confident that they know what they are doing.

For reference, a 4096-bit RSA key will be in this format:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDowuIZFbN2EWbVwK9TG+O0S85yqr7EYc8Odv76H8+K6I7prrdS23c3rIYsKl2mU0PjOKGyyRET0g/BpnU8WZtDGH0lKuRaNUT5tpKvZ1iKgshdYlS5dy25RxpiVC3LrspjmKDY/NkkflKQba2WAF3a5M4AaHxmnOMydk+edBboZhklIUPqUginLglw7CRg/ck99M9kFWPn5PiITIrpSy2y2+dt9xh6eNKI6Ax8GQ4GPHTziGrxFrPWRkyLKtYlYZr6G259E0EsDPtccO5nXR431zLSR7se0svamjhskwWhfhCEAjqEjNUyIXpT76pBX/c7zsVTBc7aY4B1onrtFIfURdJ9jduYwn/qEJem9pETli+Vwu8xOiHv0ekXWiKO9FcON6U7aYPeiTUEkSDjNTQPUEHVxpa7ilwLZa+2hLiTIFYHkgALcrWv/clNszmgifdfJ06c7pOGeEN69S08RKZR+EkiLuV+dH4chU5LWbrAj/1eiRWzHc2HGv92hvS9s/c= someuser@brandonsLaptop

And for comparison, an ed25519 key looks like this:

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLEURucCueNvq4hPRklEMHdt5tj/bSbirlC0BkXrPDI someuser@ip-172-31-74-201

You’ll notice that in both of these, the first characters contain the key type. The middle section with all of the random-looking characters contains the base64-encoded public key. And at the end is a comment that is intended to identify the user to whom it belongs.
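To make those three fields concrete, here is a small shell sketch that splits the ed25519 key above into its type, base64 blob, and comment:

```shell
# Split an authorized_keys-style line into its three parts.
key='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBLEURucCueNvq4hPRklEMHdt5tj/bSbirlC0BkXrPDI someuser@ip-172-31-74-201'

key_type=$(echo "$key" | awk '{print $1}')      # the key type
key_blob=$(echo "$key" | awk '{print $2}')      # the base64-encoded public key
key_comment=$(echo "$key" | cut -d' ' -f3-)     # the free-form comment

echo "type=$key_type comment=$key_comment"
# prints: type=ssh-ed25519 comment=someuser@ip-172-31-74-201
```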

The ed25519 key is much shorter than an RSA key, so if you’ve never seen one before, you might think it is less secure. But this key type is newer and uses a totally different, more complex algorithm. Although the 256-bit ed25519 key has fewer characters, it is, for all practical purposes, as secure as the 4096-bit RSA key above. The ed25519 algorithm is more computationally complex, so it requires fewer bits for a similar level of security.

The ed25519 algorithm is based on elliptic-curve cryptography instead of the prime numbers that the RSA algorithm uses. It has been in wide use for roughly 10 years, is supported by all modern software, and as such is the current standard for most professional users. Creating a key is simple with the ssh-keygen command. But before jumping to the actual command, I wanted to explain a few other tips that I use and think others should adopt as well.

Keys should be created by individuals, not issued to groups

You should never share your private key with anybody. Ever. If a key is ever shared, you have to assume that the other party can impersonate you on any system in which it is used.

I’ve been a part of some teams which create a new server, create a new key to access that server, and share the new key with everybody who needs to access the machine. I think this practice stems from AWS and other providers that create an SSH key for you along with a new machine, and users just continuing the practice. I wish they’d change that.

That’s the backwards way of thinking about it. Individuals should own their own keys, and keys should be private. You can add multiple public keys to resources where multiple people need access. Again, I wish AWS and others would make this easier instead of allowing only a single key. You then revoke access by removing a public key, instead of having to re-issue a new key whenever the group changes. (Or worse, not changing the key at all!)

Rotating your SSH keys

You should rotate your SSH keys regularly. The thought process here is that if you have used the same key for a long time, and then your laptop with your private key gets lost, or your key compromised, every machine that you’ve been granted access to over that time is potentially at risk, because administrators are notoriously bad about revoking access. By changing out your key regularly, you limit the potential access in the case of a compromised key. Generating a new SSH key also ensures that you are using more modern algorithms and key sizes.

I like to create a new SSH key about every two years. To remind myself to do this, I embed the year I created the key within its name. My last key was created in March 2023, which I have named [email protected]. I’m creating a new key now, at the beginning of 2025, which I’ll name with the current year. Each time I use it, I’m reminded when I created the key, and if it gets to be around two years old and I have some free time, I’ll create a new key. Of course, I keep all of my older keys in case I need access to something I haven’t accessed for a while. My ssh-agent usually has my two most recent keys loaded. If I do need to use an older one, it is enough of a process to find and use it that the first thing I’ll do is update my key as soon as I get into a system where an old key was needed.

Don’t use the default ssh-keygen comment

I also suggest that you make the SSH key comment something meaningful. If you don’t provide a comment, most ssh-keygen implementations default to your_username@your_machine_name, which might be silly or meaningless. In a professional setting, it should clearly identify you. For example, BrandonChecketts as a comment is better than me00101@billys2017_macbook_air. It should be meaningful both to you and to whomever you are sharing it with.

I mentioned including the creation month above, which I like to include in the comment because, when sharing the public key, it subtly demonstrates that I am security conscious, have rotated it recently, and know what I’m doing. The comment at the end of the key can be changed without affecting its functionality, so I might change the comment depending on who I’m sharing it with. When I receive a public key from somebody else that contains a generic comment, I often change the comment to include their name or email address so I can later remember to whom it belongs.
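Since only the first two fields matter cryptographically, relabeling a key you received is a one-line text edit. The key material and names below are made-up examples:

```shell
# Keep fields 1-2 (type and key material), replace the comment with a label
# you will recognize later. The key material here is a fake placeholder.
received='ssh-ed25519 AAAAfakeExampleKeyMaterial0 gamer@desktop'
labeled=$(echo "$received" | awk '{print $1, $2, "jane-doe-march-2025"}')
echo "$labeled"
```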

Always use a passphrase

Your SSH key is just a tiny file on disk. If your machine is ever lost, stolen, or compromised in any way by an attacker, the file is pretty easy for them to copy. Without it being encrypted with a pass phrase, it is directly usable. And if someone has access to your SSH private key, they probably have access to your bash or terminal history and would know where to use it.

As such, it is important to protect your SSH private key with a decent pass phrase. To avoid typing your pass phrase over and over, use the SSH-Agent, which will remember it for your session.
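Here is a minimal sketch of that agent workflow. It uses a throwaway, passphrase-less key in a temp directory so it is safe to run anywhere; a real key would live in ~/.ssh, have a passphrase, and ssh-add would prompt for it a single time:

```shell
# Start an agent for this shell session; ssh then uses loaded keys silently.
eval "$(ssh-agent -s)" > /dev/null

# Generate a throwaway demo key (real keys should have a passphrase).
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -f "$dir/demo" -N '' -C "demo-key"

ssh-add -q "$dir/demo"   # add the key to the agent
ssh-add -l               # list fingerprints of loaded keys
```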

Understand and use SSH-Agent Forwarding when applicable

SSH agent forwarding allows you to ssh into one machine and then transparently “forward” your SSH keys to that machine for use in authenticating to a machine beyond it. I most often use this when authenticating to GitHub from a remote machine. Using agent forwarding means that I don’t have to copy my SSH private key onto the remote machine in order to authenticate to GitHub from there.

You shouldn’t, however, just blindly use SSH agent forwarding everywhere. If you access a compromised machine where an attacker may have access to your account or to the root account, you should NOT use agent forwarding, since it is possible for them to use your forwarded agent to authenticate as you. I’ve never seen this exploited, but since it is possible, you should only use SSH agent forwarding to systems which you trust.
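One way to keep forwarding scoped to machines you trust is in ~/.ssh/config, so you never rely on remembering to add or omit the -A flag; the host name here is hypothetical:

```
# ~/.ssh/config — forward the agent only to a specific trusted host
Host trusted-jump-host
    ForwardAgent yes

# Everything else: never forward the agent
Host *
    ForwardAgent no
```

ssh applies the first matching value, so the specific Host entry must come before the Host * catch-all.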

The ssh-keygen Command

With all of the above context, this is the command you should use to create your ed25519 key:

ssh-keygen -t ed25519 -f ~/.ssh/your-key-filename -C "your-key-comment"

That will ask you for a pass phrase and then show you a randomart image that represents your public key when it is created. The randomart is just a visual representation of your key so that you can see it is different from others.

 $ ssh-keygen -t ed25519 -f ~/.ssh/[email protected] -C "[email protected]"
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ~/.ssh/[email protected]
Your public key has been saved in ~/.ssh/[email protected]
The key fingerprint is:
SHA256:HiCF8gbV6DpBTC2rq2IMudwBc5+QuB9NqeGtc3pmqEY brandon+2025@roundsphere
The key's randomart image is:
+--[ED25519 256]--+
| o.o.+.          |
|  * +..          |
| o O...          |
|+ A *. .         |
|.B % .  S        |
|=E* =  . .       |
|=+o=    .        |
|+==.=            |
|B..B             |
+----[SHA256]-----+

Obsessive/Compulsive Tip

This may be taking it too far, but I like to have a few memorable digits at the end of the key so that I can confirm the key got copied correctly. One of my keys ends in 7srus, so I think of it as my “7’s ‘R’ Us” key. You can regenerate over and over again until you find a key that you like with this one-liner:

rm -f newkey newkey.pub; ssh-keygen -t ed25519 -f ./newkey -C "[email protected]" -N ''; cat newkey.pub

That creates a key without a passphrase, so you can run it repeatedly until you find a public key that you “like”. Then protect it with a passphrase with this command:

ssh-keygen -p -f newkey

And obviously, you should then rename newkey and newkey.pub to more meaningful names.

Replacing your public key when you use it

As you access machines, make sure to add your new key to, and remove old keys from, your ~/.ssh/authorized_keys file. At some point, remove your previous key from your ssh-agent; then, when you hit a machine that still only has the old key, you’ll be forced to dig it out to get in, and can replace it with the new key.

Is that complete? What other tips should others know about when creating an SSH Key in 2025 and beyond?

Practical Financial Advice for Those Terminally Ill (and their immediate families)

In a break from my usual business and technical content, this is something I’ve thought about posting for a while. I think it may be useful to some in this unfortunate situation, and perhaps posting it publicly will help somebody else.

My wife passed away about 6 years ago. She was diagnosed with a brain tumor about 8 years before that. This scenario obviously sucks, but I’ve learned some things that I think are worth sharing for others who are in that situation.

Life Insurance Options

You’ll probably be unable to get any traditional life insurance for obvious reasons.

Group Life Insurance through one of your employers

I owned a business at the time, and we had always offered Group Life Insurance to all employees. This type of insurance is relatively inexpensive for employers if they pay for it for all employees. It requires no medical questions, and coverage is available to every employee and their spouse. I want to say it cost something like $3/month per employee for $25,000 of coverage for the employee or spouse.

If you own a business with employees, this may be a way to get some life insurance when you are unable to otherwise. If you go through a broker, ask if they can provide group life insurance. If you work for a small business, maybe you could speak with the owner or whoever manages benefits to see if it is something your company could offer.

Check if you can increase any existing policies

If you have any existing Life Insurance policies, check with your insurance agent if it has an option to add additional insurance. This is sometimes called an “Option to Purchase Additional Insurance”, or a “Future Purchase Option”. We had one policy that had an option every 5 years to add additional insurance without any questions asked.

Social Security Survivors Benefits for Children

The United States Social Security program has a survivor benefit available to minor children of a parent who dies. The benefit calculation is complicated, but for my wife it was exactly 40% of her highest W2 wages. For example, if you made $60,000/year in some prior year, after you die, your surviving children will receive about $2,000/month ($24,000/year) until they reach 18 years old or graduate from high school. That amount is increased annually with inflation. This can add up to a very significant amount!

If you have multiple children, when the first one becomes ineligible, by turning 18 or graduating from high school, the same total amount is split among the remaining minor children. We had four minor children, each one receiving about $500/month ($2,000/month total). When the oldest graduated High School, the remaining three each received about $670/month (still $2,000/month total).
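The numbers from the example above reduce to simple arithmetic. These figures are just the illustrative ones from this post, not an official SSA formula:

```shell
# Family benefit: roughly 40% of the highest W2 year, split among minor children.
wages=60000
family_annual=$((wages * 40 / 100))      # 24000 per year
family_monthly=$((family_annual / 12))   # 2000 per month

per_child_of_4=$((family_monthly / 4))   # 500 each with four children
per_child_of_3=$((family_monthly / 3))   # 666 each with three (about $670)

echo "$family_monthly $per_child_of_4 $per_child_of_3"
# prints: 2000 500 666
```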

Technically the children receive the payments, but the surviving parent is custodian of it and has to use it for the benefit of the children. If the children are living with the surviving parent, the Social Security Administration doesn’t require any documentation that the funds are being used for their shelter and food. Other circumstances probably require proof that the funds are used for approved purposes.

Calculation of the Survivor’s Benefit Amount

I’m unable to find any actual calculator or statement online that says exactly how the benefit amount is calculated. In my experience, it was based on the HIGHEST single year of W2 wages that my wife earned in prior years. In her highest single year, she made $60,000, and the annual benefit for the surviving children started at $24,000/year ($2,000/month). This is different from Social Security retirement benefits, which are based on a formula averaging the highest 35 years of earnings.

I would love to see how others’ benefits are calculated, to confirm whether they are based on the highest single year of earnings also. If so, it is to your children’s future benefit if you can work to earn as much as possible in a single calendar year. By way of example, if you can add $10,000 to your W2 wages, your children will receive an additional $4,000 per year until the youngest turns 18 years old. Perhaps you could work with your employer, customers, and others to focus your earnings into one calendar year.

Other Ideas

If you have any other questions or suggestions to share, feel free to comment below

Several AWS Step Function Events Should be Classified as Data Events

At DataAutomation, we use the AWS Step Functions service pretty extensively. It provides a nice, modular framework for us to build custom workflows for customers. We make millions of requests per day to the service. We also use AWS GuardDuty for threat detection.

GuardDuty monitors the CloudTrail log for odd things happening in your AWS account. It also monitors for suspicious network traffic and potential weaknesses on your EC2 instances, among other things. I actually like GuardDuty quite a bit.

I have one complaint about this combination of AWS usage, though. With our high-volume usage of AWS Step Functions, all of those common state machine events like creating tasks, executing the tasks, and deleting them go through CloudTrail, and thus through GuardDuty for monitoring. GuardDuty can get kind of expensive for this, since we’re generating hundreds of thousands or millions of events per day.

S3 and DynamoDB are similar in this respect: when using those services, you can rack up millions of events very quickly. They have a solution that classifies events as either “Management Events” or “Data Events”. Management Events include things like creating a new S3 bucket or changing policies on the bucket, while Data Events include things like adding, reading, or deleting items from the bucket. On the DynamoDB side, Management Events include creating or modifying tables or access to the tables, while Data Events include reading or writing to the tables.
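This classification is what makes the cost controllable: a trail can opt in to or out of Data Events with event selectors. As a hypothetical sketch, a CloudTrail advanced event selector that logs only S3 object-level Data Events looks something like this:

```
{
  "AdvancedEventSelectors": [
    {
      "Name": "S3 object-level data events only",
      "FieldSelectors": [
        { "Field": "eventCategory", "Equals": ["Data"] },
        { "Field": "resources.type", "Equals": ["AWS::S3::Object"] }
      ]
    }
  ]
}
```

Management Events, by contrast, always flow to GuardDuty, which is exactly why the classification of the Step Functions events matters.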

Step Functions does include one Data Event: InvokeHTTPEndpoint. However, I’d like for the Step Functions team to consider making the events related to “using” the service into Data Events as well. This list should include all of the execution events (StartExecution, StartSyncExecution, RedriveExecution, ListExecutions, DescribeExecution, GetExecutionHistory, DescribeStateMachineForExecution, StopExecution) and the task token events (SendTaskSuccess, SendTaskHeartbeat, and SendTaskFailure), as well as the GetActivityTask event.

I have created an AWS support ticket to try to explain this in as much detail as possible to the Step Functions team. I think it gets lost inside AWS because the effects are not readily apparent to the Step Functions team, since the cost ends up associated with GuardDuty. If you have similar problems, I encourage you to create a similar ticket with a detailed explanation and ask that it be directed to the Step Functions team, who I believe is the team most qualified to make this change.

Scripts published for calculating Sales Tax in Texas from Stripe Transaction exports

Following up on my previous complaints about Texas collecting back sales tax from SaaS companies, I put quite a bit of time into writing some PHP scripts to calculate the Texas sales tax due and complete their forms.

Looking through the actual Stripe transaction detail and determining the sales tax due will save our company tens of thousands of dollars compared to the original estimates that our accountant calculated.

I’m releasing some of the PHP scripts that I wrote for this on GitHub in case anybody else finds them useful. They are pretty plain PHP, so hopefully they are straightforward to follow.

Head on over to https://github.com/bchecketts/stripe-sales-tax-aid if that would be useful for you. Comment below or make Github Issues if you have something to share.

Texas Collecting Sales tax on SaaS in 2017 is like …

Imagine you were on a road trip in 2017, driving a big RV across the United States from coast to coast. You drove 200 miles through a corner of Texas, got some gas and a meal there, and didn’t think much of it.

Now, in 2024, you’re taking your car on a road trip and again drive through a corner of the Lone Star State. As you cross the state line, you come upon a toll booth. You’re surprised at the $50 toll, but you recall some random news that states are starting to do this. You pull out your credit card to pay, and the attendant informs you that you owe an additional $1,000 for an unpaid toll from when you were last here, 7 years ago.

You comment that you don’t recall it being a toll road back then, and he informs you that it is based on a law from 2008, so it was clearly in place in 2017. Again, you say that you don’t recall seeing a sign or speeding through a toll booth. He then admits that they didn’t actually have the toll booth built back then. But he shows you a picture from 2017 of a 2-foot-tall sign, far off the road, that includes 4 paragraphs of the state statute and an address to mail payment. The sign is partially obscured behind a tree.

You comment that it seems unreasonable to expect somebody from out of state, driving through at highway speed, to be able to read and obey this obscure sign. As you’ve driven around the country, even back in 2017, there was usually ample notice, a toll booth like they have now, and a reasonably easy way to pay the toll.

The response is that the law is the law. Ignorance doesn’t mean you don’t have to obey it. You can’t proceed through the state. You can pay the toll now, or set up a payment plan. You have the option of turning around and backtracking 200 miles to go an alternate route, but now that they have your picture and know who you are, they may be able to just take the money from your bank account.

That’s a pretty accurate comparison to the State of Texas’s requirement to collect back taxes on Software as a Service from 2017. Only in the past couple of years have software companies become aware that SaaS is taxable in a few states. It would be extremely controversial to collect a back toll like the one in this example, yet that’s a pretty close comparison to what businesses are having to do with sales and use tax there now. There seems to be no leniency, despite the lack of any notice or general instruction to businesses that could not reasonably have been expected to be aware of this requirement.

UPDATE: I’ve published some of the scripts used for complying with this at https://www.brandonchecketts.com/archives/sales-tax-from-stripe-transactions-report

Simplify Amazon Custom and Amazon Handmade fulfillment with ShipStation and Data Automation’s SyncPersonalized

DataAutomation builds connectors between all kinds of e-commerce platforms and tools. A lot of these are custom integrations, but sometimes we run across one that is useful to a lot of people.

One that we’ve been having a lot of success with has to do with Amazon sellers who use the Amazon Handmade or Amazon Custom programs. Amazon Custom sells things that are personalized for the buyer: think buying a T-shirt on Amazon and having your name printed on it, or buying a ring with a customized engraving. Amazon Handmade is for products that are made by hand, sort of Amazon’s equivalent of Etsy.

A lot of sellers use ShipStation or Veeqo to help them fulfill their orders. These systems print packing slips, help buy the right postage, and print the shipping labels for the sellers. Both have native integrations with Amazon, but because the Amazon API makes it very hard to retrieve the information about the customizations, their integrations don’t retrieve it. That means these sellers have to go back and forth between their shipping/fulfillment system and Amazon Seller Central to find the details for each order. That’s cumbersome, time-consuming, and leads to problems.

So Data Automation built a useful application that supplements the Amazon order information in ShipStation and Veeqo so that it includes the customizations that the buyer entered during checkout. The customizations can be printed on packing slips and are visible inside these shipping systems. That can greatly simplify the sellers’ workflow and help them avoid errors.

I just finished building this into a self-service application, so sellers can sign up, connect their systems, subscribe, and be up and running in just a few minutes. It’s called SyncPersonalized, and information about it is available on DataAutomation’s Amazon Custom & Handmade page.

Clever Trick to clear a negative DNS Cache

I just discovered a clever trick that can be used to clear a negative DNS cache entry. I sometimes need to do this if I try to use a DNS resource before I’ve actually created it. For example, when I start a new project, I often intend to use the hostname ‘app.mynewproject.com’ to run the application website. If I try to open this in a browser before creating the DNS entry, many DNS servers will cache the negative response (i.e., app.mynewproject.com does NOT exist, so don’t ask the upstream server for it again), sometimes for a long time.

I’ve found that I can often create a CNAME record that points to the desired resource. When looking up the CNAME record, the authoritative server also sends the answer for the record that was thought to be invalid. This seems to clear the cache, so a subsequent request for the previously negatively-cached record then works. This is much faster than waiting for it to expire!
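As a sketch, suppose app.mynewproject.com was negatively cached before its A record existed. Adding a helper CNAME that points at it (the helper name and address below are purely illustrative) means a lookup of the helper returns both records together, refreshing the resolver's cache for the original name:

```
; Illustrative zone fragment
app.mynewproject.com.    300  IN  A      203.0.113.10
warm.mynewproject.com.   300  IN  CNAME  app.mynewproject.com.
```

Querying warm.mynewproject.com forces the resolver to follow the CNAME, and the answer it caches includes the A record for app.mynewproject.com.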


© 2025 Brandon Checketts
