Build custom AWS CloudWatch metrics to monitor your Express HTTP connections

Monitoring is key to troubleshooting. Here is how I built a custom AWS CloudWatch metric to monitor and tune HTTP connections in nodeJS and Express.

I was initially troubleshooting an ECS (EC2 launch type) nodeJS task that crashed periodically. The stack is a classic monolithic MERN (Mongo, Express, React, Node).

The setup allowed 256MB of RAM for the node app on a t2.small instance (which to me should be enough for a basic web app without much processing).

Usual troubleshooting practices led me to look at the ALB metrics. And indeed, I was receiving spikes of up to 900 requests/min over a 15-minute span according to CloudWatch. The source of these calls was a weekly bulk sync process via REST API, which I didn’t expect to dump so many requests at once…

Looking at the ECS logs, I quickly found that Docker had killed the container because it exceeded the 85% memory hard limit.

This is true for any system: more requests means more resources. But sometimes we want to limit the request flow to keep resource usage and cost under control.

As an experienced J2EE developer, I know which metrics to look at in Tomcat, but what about nodeJS/Express?

What are the max HTTP connection limits and pools? When should I scale up? Should I scale vertically?

The ALB gives connection metrics for the whole cluster and per target group. But what about per container?

I did some googling, but little of it relates to Express HTTP tuning. It’s as if most devs go all-in with the node/Express HTTP server without ever caring about default connection limits… Coming from the Java and Tomcat world, that seemed too good to be true. I needed to understand the truth.
And indeed, surprisingly, some articles mention that nodeJS is elastic: by default there is no max limit, so in case of an HTTP request spike, it will simply open as many connections as it can.

By default, the two factors that effectively cap requests on a node app are the open file descriptor limit and the container memory limit.

If you reach the open file descriptor limit, Express will simply reject new connections, so make sure to increase the maximum open file descriptors on your system. The default is usually 1024. Check the value with ulimit -a.

This limit was already increased in my case. If not, you can raise it.
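The original command isn’t shown here; a typical way to raise the limit (a sketch, assuming a Linux box with bash and the usual PAM limits setup) is:

```shell
# Show current limits; "open files" is the file descriptor limit
ulimit -a

# Raise the soft limit for this shell session, up to the hard limit
ulimit -n "$(ulimit -Hn)"

# To persist it across sessions, add lines like these to /etc/security/limits.conf:
#   *  soft  nofile  65536
#   *  hard  nofile  65536
```

The `ulimit -n` change only affects the current shell and its children, so run it (or configure limits.conf) wherever your node process actually starts.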

But if you reach the container memory limit, Docker will kill your app once it crosses the memory threshold. If your app is not in a cluster and doesn’t auto-scale or auto-restart, you may want to set a lower connection limit.

Once the cap is reached, Express will reject new incoming connections and curl will report a connection error.

This is a quick workaround that still doesn’t cover our use case: during a spike, the app will reject connections and requests will be lost. With a connection pool like in Tomcat, we could delay the requests, but unfortunately I didn’t find an equivalent for Express. At least the app won’t crash.

Lowering the HTTP request rate from the caller could have been an option, but I don’t own that third-party service. In the end, I chose to change the architecture to send the requests from my end.

On a production system, it’s important to know your app’s limits so you can adjust system requirements before go-live and allow autoscaling based on key metrics you have defined.

As we said earlier, by default the app will use as much memory as is available and open one file descriptor per HTTP socket. Therefore, the usual graphs to monitor per container are memory, total open file descriptors, CPU (if your CPU is 100% busy all the time, processing requests takes longer and more connections hang), and finally HTTP connections.

I assume you already know how to display the first 3 metrics on your favorite monitoring system. But what about HTTP connections?

Tomcat provides built-in JMX monitoring that conveniently lets you track connections, requests, threads, and pools. But nothing similar exists in the nodeJS Express HTTP server.

A quick option would be to rely on your reverse proxy metrics (Nginx, ALB, or Apache).

But if you have more than one node behind the reverse proxy, it will only give you the total requests for your whole cluster. What if you want to monitor each node in your cluster?

Here comes the purpose of my article: Create your custom CloudWatch HTTP metrics.

The only way to get the number of open HTTP connections at a given time with Express is to create a server object from the built-in node http module.

Then call the server’s callback-based connection-count function.

And make sure to update your code to use server.listen(...) instead of your Express app.listen(...).

Note: This function only gives a connection count at a given time, not an average nor a total count from startup.

Now, to be able to monitor this metric you need to push the value every X minute/second to a monitoring system.

As the stack is hosted in AWS, I’ll be showing how to push this value into CloudWatch.
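The original code isn’t shown here; a minimal sketch of such a pusher, assuming aws-sdk v2 is installed and reusing the namespace and dimension names that appear in the CloudWatch console (“App/NodeJS”, “HTTPConnections”, “PerNodeId”). The helper name startConnectionMetricPusher is my own:

```javascript
// Assumption: aws-sdk v2 is installed (npm install aws-sdk) and the
// AWS region/credentials come from the environment or the ECS task role.
function startConnectionMetricPusher(server, nodeId, intervalMs = 60000) {
  const AWS = require('aws-sdk');
  const cloudwatch = new AWS.CloudWatch();

  return setInterval(() => {
    server.getConnections((err, count) => {
      if (err) return console.error(err);
      cloudwatch.putMetricData({
        Namespace: 'App/NodeJS',
        MetricData: [{
          MetricName: 'HTTPConnections',
          Dimensions: [{ Name: 'PerNodeId', Value: nodeId }],
          Unit: 'Count',
          Value: count,
        }],
      }, (err) => {
        if (err) console.error('putMetricData failed:', err);
      });
    });
  }, intervalMs);
}
```

Call it once after the server starts, e.g. `const timer = startConnectionMetricPusher(server, 'node-1');`, and clearInterval(timer) on shutdown.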

2. Add an IAM policy to grant your app permission to push metrics (in my case, on the ECS task role):
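The policy isn’t shown in the original; the minimal statement that grants this is (note that cloudwatch:PutMetricData does not support resource-level restrictions, hence the wildcard resource):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    }
  ]
}
```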

That’s it!

3. Start the app with node index.js and keep it running.
Refresh your app page in your web browser or run curl in a loop to generate traffic.
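For example (assuming the app listens on localhost:3000; adjust the URL to yours):

```shell
# Fire 100 sequential requests to generate some traffic
for i in $(seq 1 100); do
  curl -s -o /dev/null http://localhost:3000/ || true
done
```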

You should see the connection count appear in the logs.

Note: If you have very few requests that all close within a few milliseconds of the beginning of the second, it’s likely that the count stays at 0, as the cron samples the value at the beginning of the second.

Browse to your CloudWatch “Metric” tab.
In “All metrics”, browse to “App/NodeJS > HTTPConnections > PerNodeId” and refresh your CloudWatch metrics a couple of times; you should now see a graph like this:

Custom http connection metric displayed in AWS cloudwatch

If you have the keep-alive header enabled, it’s likely that your browser or reverse proxy will reuse connections for performance, so the total number of HTTP requests sent can differ from the number of open connections in Express.

Note: By default, custom metrics are stored at standard (one-minute) resolution, and the CloudWatch console often graphs them with a 5-minute period even if you push every second. Pass StorageResolution: 1 in putMetricData to store high-resolution data points, and lower the graph period in the console to see them.

From there, you can trigger CloudWatch alarms and even events (auto-scaling, etc.) with finer-grained tuning than the memory metric allows.

Happy monitoring!
