It's important to understand the vision and the tradeoffs between OpenFaaS Edge (faasd-pro) and OpenFaaS on Kubernetes.
faasd is a single-node implementation of OpenFaaS.
It is intended to be a lightweight, low-overhead way to deploy OpenFaaS functions which do not need to scale out.
It is not supposed to have multiple replicas, clustering, High Availability (HA), or auto-scaling.
The following `faas-cli` commands are supported:

- `faas-cli login`
- `faas-cli up`
- `faas-cli list`
- `faas-cli describe`
- `faas-cli deploy --update=true --replace=false`
- `faas-cli invoke --async`
- `faas-cli invoke`
- `faas-cli rm`
- `faas-cli store list/deploy/inspect`
- `faas-cli version`
- `faas-cli namespace`
- `faas-cli secret`
- `faas-cli logs`
- `faas-cli auth` - supported for Basic Authentication and OpenFaaS Pro with OIDC and Single Sign-On
The OpenFaaS REST API is supported by faasd, learn more in the manual under "Can I get an API with that?"
faasd suits certain use-cases, as mentioned in the README.md file. For those who want a solution which can scale out horizontally with minimal effort, Kubernetes or K3s is a valid option.
Which is right for you? Read a comparison in the OpenFaaS docs.
Each function supports only one replica, which means horizontal scaling is not available.
It can scale vertically, and this may be a suitable alternative for many use-cases. See the YAML reference for how to configure limits.
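As a sketch of vertical scaling, limits can be set per function in `stack.yml`; the function name, image, and values below are hypothetical, and the exact fields are covered in the YAML reference:

```yaml
functions:
  scraper:
    lang: python3-http
    handler: ./scraper
    image: ghcr.io/example/scraper:latest
    limits:
      # Cap the resources this single replica may consume
      memory: 512Mi
      cpu: 500m
```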
Workaround: deploy multiple, dynamically named functions `scraper-1`, `scraper-2`, `scraper-3` and set up a reverse proxy rule to load balance, i.e. `scraper.example.com` => `[/function/scraper-1, /function/scraper-2, /function/scraper-3]`.
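One way to implement such a rule is with nginx's `split_clients` module, which picks a backend path per request; the host name and gateway address below are assumptions, and this fragment belongs inside the `http` block of `nginx.conf`:

```nginx
# Distribute requests across three identically-deployed functions.
# $request_id is unique per request, giving a roughly even split.
split_clients "${request_id}" $scraper_path {
    34% /function/scraper-1;
    33% /function/scraper-2;
    *   /function/scraper-3;
}

server {
    listen 80;
    server_name scraper.example.com;

    location / {
        # faasd gateway assumed to be listening on 127.0.0.1:8080
        proxy_pass http://127.0.0.1:8080$scraper_path;
    }
}
```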
faasd operates on a leaf-node/single-node model. If this is an issue for you, but you have resource constraints, you will need to use OpenFaaS CE or Pro on Kubernetes.
There are no plans to add any form of clustering or multi-node support to faasd.
See past discussion at: HA / resilience in faasd #225
What about HA and fault tolerance?
To achieve fault tolerance, you could put two faasd instances behind a load balancer or proxy, but you will need to deploy the same set of functions to each.
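As a sketch, a Caddyfile could spread traffic across two such instances; the host names are hypothetical, and both hosts must have the same set of functions and secrets deployed:

```
faas.example.com {
    # Round-robin between two faasd hosts, skipping any upstream that
    # fails the gateway's built-in /healthz check.
    reverse_proxy faasd-a.internal:8080 faasd-b.internal:8080 {
        lb_policy round_robin
        health_uri /healthz
    }
}
```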
An alternative would be to take regular VM backups or snapshots.
When running faas-cli deploy, your old function is removed before the new one is started. This may cause a period of downtime, depending on the timeouts and grace periods you set.
Workaround: deploy uniquely named functions, i.e. `scraper-1` and `scraper-2`, with a reverse proxy rule that maps `/function/scraper` to the active version.
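For example, with Caddy the stable path can be rewritten to whichever version is live; editing the rewrite and reloading Caddy switches traffic without removing the old function first (the host name and gateway port are assumptions):

```
scraper.example.com {
    # Point the stable path at the currently-active deployment;
    # change scraper-2 to scraper-1 (or vice versa) and reload to cut over.
    rewrite /function/scraper /function/scraper-2
    reverse_proxy 127.0.0.1:8080
}
```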
There is a very detailed chapter on troubleshooting in the eBook Serverless For Everyone Else.
See the manual for how to configure longer timeouts.
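For instance, a function's watchdog timeouts can be raised via environment variables in `stack.yml`; the function name, image, and values here are illustrative, and the gateway and provider have their own timeouts which are covered in the manual:

```yaml
functions:
  scraper:
    image: ghcr.io/example/scraper:latest
    environment:
      # of-watchdog timeouts for reading the request, writing the
      # response, and running the handler itself
      read_timeout: "5m"
      write_timeout: "5m"
      exec_timeout: "5m"
```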
This issue appears to happen sporadically and only for some users.
If you get a non-200 HTTP code from the gateway, or from Caddy after installing faasd, check the logs of faasd:
```shell
sudo journalctl -u faasd
```

If you see the following error:

```
unable to dial to 10.62.0.5:8080, error: dial tcp 10.62.0.5:8080: connect: no route to host
```
Restart the faasd service with:
```shell
sudo systemctl restart faasd
```

Should have:
- Restart any of the containers in docker-compose.yaml if they crash.
- Asynchronous function deployment and deletion (currently synchronous/blocking)
Nice to Have:
- Live rolling-updates, with zero downtime (may require using IDs instead of names for function containers)
- Apply a total memory limit for the host (if a node has 1GB of RAM, don't allow more than 1GB of RAM to be specified in the limits field)
- Terraform for AWS EC2
Won't have:
- Clustering
- Multiple replicas per function
- Docs or examples on how to use the various event connectors (Yes in the eBook)
Completed:

- Resolve core services from functions by populating/sharing `/etc/hosts` between `faasd` and `faasd-provider`
- Provide a cloud-init configuration for faasd bootstrap
- Configure core services from a docker-compose.yaml file
- Store and fetch logs from the journal
- Add support for using container images in third-party public registries
- Add support for using container images in private third-party registries
- Provide a cloud-config.txt file for automated deployments of faasd
- Inject / manage IPs between core components for service to service communication - i.e. so Prometheus can scrape the OpenFaaS gateway - done via an `/etc/hosts` mount
- Add queue-worker and NATS
- Create faasd.service and faasd-provider.service
- Self-install / create systemd service via `faasd install`
- Restart containers upon restart of faasd
- Clear / remove containers and tasks with SIGTERM / SIGINT
- Determine arm64 containers to run for gateway
- Configure `basic_auth` to protect the OpenFaaS gateway and faasd-provider HTTP API
- Set up a custom working directory for faasd: `/var/lib/faasd/`
- Use CNI to create network namespaces and adapters
- Optionally expose core services from the docker-compose.yaml file, locally or to all adapters.
- containerd can't pull image from GitHub Docker Package Registry - `ghcr.io` support
- Provide a simple Caddyfile example in the README showing how to expose the faasd proxy on port 80/443 with TLS
- Annotation support
- Hard memory limits for functions
- Terraform for DigitalOcean
- Store and retrieve annotations in function spec - in progress
- An installer for faasd and dependencies - runc, containerd
- Offer a recommendation or implement a strategy for faasd replication/HA
- Remove / deprecate armhf / armv7 support
- Add support for CPU/RAM metrics in the UI, CLI and API
- Network segmentation (functions cannot talk to each other or the host)