Current status
backend/node_modules/fastify/docs/Guides/Recommendations.md (34 changed lines; generated, vendored)
@@ -212,17 +212,19 @@ server {
 # server group via port 3000.
 server {
   # This listen directive asks NGINX to accept requests
-  # coming to any address, port 443, with SSL, and HTTP/2
-  # if possible.
-  listen 443 ssl http2 default_server;
-  listen [::]:443 ssl http2 default_server;
+  # coming to any address, port 443, with SSL.
+  listen 443 ssl default_server;
+  listen [::]:443 ssl default_server;
 
   # With a server_name directive you can also ask NGINX to
   # use this server block only with matching server name(s)
-  # listen 443 ssl http2;
-  # listen [::]:443 ssl http2;
+  # listen 443 ssl;
+  # listen [::]:443 ssl;
   # server_name example.tld;
 
+  # Enable HTTP/2 support
+  http2 on;
+
   # Your SSL/TLS certificate (chain) and secret key in the PEM format
   ssl_certificate /path/to/fullchain.pem;
   ssl_certificate_key /path/to/private.pem;
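Background on this hunk: NGINX 1.25.1 deprecated the `http2` parameter of the `listen` directive in favor of the standalone `http2 on;` directive, which is what the new lines adopt. For orientation, a minimal sketch of the Fastify upstream such a config would proxy to; the port matches the `3000` server group referenced in the config, while the route is illustrative:

```js
// Hypothetical upstream app for the NGINX config above.
// Binding to 127.0.0.1 keeps it reachable only through the local proxy.
const fastify = require('fastify')()

fastify.get('/', async () => ({ hello: 'world' }))

fastify.listen({ port: 3000, host: '127.0.0.1' }, (err) => {
  if (err) {
    fastify.log.error(err)
    process.exit(1)
  }
})
```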
@@ -283,7 +285,7 @@ server {
 ## Kubernetes
 <a id="kubernetes"></a>
 
-The `readinessProbe` uses [(by
+The `readinessProbe` uses ([by
 default](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes))
 the pod IP as the hostname. Fastify listens on `127.0.0.1` by default. The probe
 will not be able to reach the application in this case. To make it work,
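This hunk only fixes the Markdown link brackets; the underlying advice stands: the probe targets the pod IP, so the app must bind to an interface the kubelet can reach, typically `0.0.0.0`. A minimal sketch (port and route path are illustrative, not taken from the diff):

```js
// Listen on all interfaces so a readinessProbe aimed at the pod IP
// can reach the server. Port 3000 and /readiness are illustrative.
const fastify = require('fastify')()

fastify.get('/readiness', async () => ({ status: 'ok' }))

fastify.listen({ port: 3000, host: '0.0.0.0' }, (err) => {
  if (err) {
    fastify.log.error(err)
    process.exit(1)
  }
})
```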
@@ -305,22 +307,22 @@ readinessProbe:
 ## Capacity Planning For Production
 <a id="capacity"></a>
 
-In order to rightsize the production environment for your Fastify application,
-it is highly recommended that you perform your own measurements against
+In order to rightsize the production environment for your Fastify application,
+it is highly recommended that you perform your own measurements against
 different configurations of the environment, which may
 use real CPU cores, virtual CPU cores (vCPU), or even fractional
 vCPU cores. We will use the term vCPU throughout this
 recommendation to represent any CPU type.
 
-Tools such as [k6](https://github.com/grafana/k6)
+Tools such as [k6](https://github.com/grafana/k6)
 or [autocannon](https://github.com/mcollina/autocannon) can be used for
 conducting the necessary performance tests.
 
 That said, you may also consider the following as a rule of thumb:
 
-* To have the lowest possible latency, 2 vCPU are recommended per app
-instance (e.g., a k8s pod). The second vCPU will mostly be used by the
-garbage collector (GC) and libuv threadpool. This will minimize the latency
+* To have the lowest possible latency, 2 vCPU are recommended per app
+instance (e.g., a k8s pod). The second vCPU will mostly be used by the
+garbage collector (GC) and libuv threadpool. This will minimize the latency
 for your users, as well as the memory usage, as the GC will be run more
 frequently. Also, the main thread won't have to stop to let the GC run.
 
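The section names k6 and autocannon for measurement (the paired old/new lines in this hunk appear to differ only in invisible whitespace). As a concrete starting point, a minimal sketch using autocannon's programmatic API; the target URL, connection count, and duration are illustrative:

```js
// Minimal load-test sketch with autocannon; compare the averages
// across different vCPU configurations of the same app.
const autocannon = require('autocannon')

autocannon({
  url: 'http://localhost:3000', // illustrative target
  connections: 100,             // concurrent connections
  duration: 30                  // seconds
}, (err, result) => {
  if (err) throw err
  console.log('req/s (avg):  ', result.requests.average)
  console.log('latency (avg):', result.latency.average, 'ms')
})
```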
@@ -328,7 +330,7 @@ frequently. Also, the main thread won't have to stop to let the GC run.
 requests per second per vCPU available), consider using a smaller amount of vCPUs
 per app instance. It is totally fine to run Node.js applications with 1 vCPU.
 
-* You may experiment with an even smaller amount of vCPU, which may provide
+* You may experiment with an even smaller amount of vCPU, which may provide
 even better throughput in certain use-cases. There are reports of API gateway
 solutions working well with 100m-200m vCPU in Kubernetes.
 
@@ -345,7 +347,7 @@ would be exposing metrics endpoints on a separate port,
 to prevent public access, when using a reverse proxy or an ingress
 firewall is not an option.
 
-It is perfectly fine to spin up several Fastify instances within the same
-Node.js process and run them concurrently, even in high load systems.
+It is perfectly fine to spin up several Fastify instances within the same
+Node.js process and run them concurrently, even in high load systems.
 Each Fastify instance only generates as much load as the traffic it receives,
 plus the memory used for that Fastify instance.
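Tying the two ideas in this hunk together, a minimal sketch of two Fastify instances in one Node.js process: a public app plus a metrics endpoint bound to localhost so it stays unreachable from outside. Ports and the metrics payload are illustrative:

```js
const fastify = require('fastify')

// Public-facing application.
const app = fastify()
app.get('/', async () => ({ hello: 'world' }))

// Second instance for metrics, kept off the public interface.
const metrics = fastify()
metrics.get('/metrics', async () => 'app_up 1\n')

const start = async () => {
  await app.listen({ port: 3000, host: '0.0.0.0' })
  await metrics.listen({ port: 9091, host: '127.0.0.1' }) // illustrative port
}

start().catch((err) => {
  console.error(err)
  process.exit(1)
})
```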