OpenEndpoints is software written in Java. The code is open source and licensed under Apache License with Commons Clause - see https://openendpoints.io/license.
For deployment we recommend using Docker. A public Docker image is available at public.ecr.aws/x1t6d0t7/endpoints-he.
By default the public Docker image installs the free XSLT processor Saxon-HE. This is sufficient for most purposes.
To deploy OpenEndpoints with the commercial edition Saxon-PE, you must buy a license for Saxon-PE (Professional Edition) and build a Docker image using that edition. The license is not expensive and it is absolutely worth the money.
Here are the steps you need to take:
Purchase a Saxon-PE license at https://www.saxonica.com/shop/shop.html. You will get two files: the JAR file containing the Saxon-PE code, and also a Saxon-PE license file.
Install Java 21 and Maven on your computer if you have not already done so.
Check out the OpenEndpoints Git repo to your computer if you have not already done so.
Execute the following command to install the Saxon-PE JAR that you have purchased into the local Maven repository on your computer, replacing the path to your downloaded file as appropriate (keep the -Dversion the same, no matter what version you have actually downloaded): mvn install:install-file -Dfile=<path to your Saxon-PE file> -DgroupId=net.sf.saxon -DartifactId=Saxon-PE -Dversion=9.7.0.18 -Dpackaging=jar
Copy the Saxon-PE license file to your Git checkout, placing it at the path saxon-pe/saxon-license.lic.
Execute the following command to build OpenEndpoints with Saxon-PE: mvn -DSaxon=PE clean package
Build the Docker image using a command like docker build -t endpoints-pe .
Push the Docker image to your Docker repository. Note that the terms of the Saxon-PE license do not allow you to make this Docker image public in any way.
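Taken together, the steps above can be sketched as a single shell script. The file paths and the image name are placeholders; adjust them to your environment:

```shell
#!/bin/sh
# Sketch of the Saxon-PE build steps; paths below are placeholders.
set -e

SAXON_JAR="$HOME/Downloads/saxon9pe.jar"            # the JAR you purchased
SAXON_LICENSE="$HOME/Downloads/saxon-license.lic"   # the license you purchased
IMAGE="endpoints-pe"

# Prerequisites: Java 21, Maven, Docker, and an OpenEndpoints Git checkout
# as the current directory.
for tool in mvn docker; do
    command -v "$tool" >/dev/null || { echo "$tool is not installed" >&2; exit 1; }
done

# Install the purchased JAR into the local Maven repository.
# Keep -Dversion=9.7.0.18 regardless of the version actually downloaded.
mvn install:install-file -Dfile="$SAXON_JAR" \
    -DgroupId=net.sf.saxon -DartifactId=Saxon-PE \
    -Dversion=9.7.0.18 -Dpackaging=jar

# Place the license file where the build expects it, then build with PE.
cp "$SAXON_LICENSE" saxon-pe/saxon-license.lic
mvn -DSaxon=PE clean package

# Build the Docker image. Remember: the Saxon-PE license forbids making
# this image public, so push it only to a private repository.
docker build -t "$IMAGE" .
```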
A PostgreSQL database is required prior to installing OpenEndpoints.
Database schema creation is not necessary; the software does that on startup.
The following data is saved in it:
Request log: This saves metadata for each call, but not the data transmitted with the call itself. The exception is debug mode, in which the data transmitted with the call can also be saved.
Incremental IDs: If the feature is used to provide incremental IDs, the last ID used is saved in the database.
Forward-to-Endpoint-URL: OpenEndpoints can generate short URLs, which, when called, cause a request to a previously-specified endpoint including previously-saved request parameters.
Service Portal: Application Data and Logins.
First you need to either create an AWS account, or have access to an existing AWS account.
Decide what internet domain the new Endpoints deployment should be reachable at, for example https://endpoints.myservice.com/. This can either be a domain such as myservice.com or a subdomain such as endpoints.myservice.com. In either case, you must own the main domain you wish to host on (myservice.com in this example). Purchasing this domain is out of scope of this document.
AWS offers free HTTPS certificates for use with their own services.
In the AWS Management Console (web interface), navigate to the Certificate Manager product.
Click “Request a Certificate”
On the form that is displayed, select “Request a Public Certificate” as this will be a publicly visible service.
Type in the domain name of the certificate. This is the full domain name; from the example above it would be endpoints.myservice.com. There is no https:// part, trailing slash, or dot.
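As a sanity check, the value typed into the certificate form can be normalized mechanically. The function below is a hypothetical helper (not part of OpenEndpoints or AWS) that strips the https:// prefix, trailing slash and trailing dot:

```shell
# Hypothetical helper: normalize a URL to the bare domain name
# expected by AWS Certificate Manager.
normalize_domain() {
    d="$1"
    d="${d#https://}"   # drop the scheme, if present
    d="${d#http://}"
    d="${d%/}"          # drop a trailing slash
    d="${d%.}"          # drop a trailing dot
    printf '%s\n' "$d"
}

normalize_domain "https://endpoints.myservice.com/"   # prints endpoints.myservice.com
```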
Click on the “Review and Request” button, then the “Confirm and Request” button.
This will either send an email to the owner of the domain (containing a link), or it will request that a particular DNS entry is made in the DNS settings for that domain. This is to prove that you own the domain for which AWS will create the HTTPS certificate. Either click on the link in the email or set up the DNS entry. A few minutes after the DNS entry has been set up, the AWS Management Console will show that the certificate has been issued.
A VPC is a part of the network managed by AWS. If this AWS account will do nothing more than serve Endpoints, you can use the default VPC. If it is shared with other services, for security we recommend creating a separate VPC for the Endpoints installation. If you decide to use the default VPC, you can skip this section. If you decide to use a separate VPC, follow these steps to create it:
In the AWS Management Console (web interface), navigate to the VPC product.
Create a new VPC, with the appropriate IP address range, such as 10.100.0.0/16. (Ignore the options about IPv6.) You can use any internal IP address range you like, as long as it doesn't conflict with any other internal IP addresses that the VPC may need to have access to (e.g. if Endpoints needs to be configured to access any databases or services internal to your company). If in doubt, talk to the department which manages IP addresses in your company. If the VPC does not need to access any other internal services, there is no reason not to proceed with the example given here.
Create two Subnets, with IP address schemes like 10.100.10.0/26 and 10.100.10.64/26. Specify the Availability Zones explicitly and choose a different zone for each Subnet, otherwise they all get created in the same Availability Zone. This allows services to be split across multiple Availability Zones, meaning that if one AWS data center encounters issues, the application will still be available.
Create an “Internet Gateway”. This will allow the VPC to send and receive requests from the internet.
After the "Internet Gateway" has been created, select it, use the "Actions" drop-down to select "Attach to VPC" and select the VPC you created earlier.
Create a “Route Table”, then after its creation:
Click on the “Routes” tab at the bottom half of the screen. Add a rule from Destination 0.0.0.0/0 to the new Internet Gateway created above.
Click on “Subnet Association” tab and associate all the subnets with the routing table.
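The console steps above can equivalently be scripted with the AWS CLI. This is a sketch using the example CIDRs from this section; the availability zone names assume the eu-central-1 region, and the commands require the AWS CLI with configured credentials:

```shell
# Requires the AWS CLI with valid credentials ("aws configure").
VPC=$(aws ec2 create-vpc --cidr-block 10.100.0.0/16 \
      --query Vpc.VpcId --output text)

# Two subnets in different Availability Zones (zone names are examples).
SUBNET_A=$(aws ec2 create-subnet --vpc-id "$VPC" --cidr-block 10.100.10.0/26 \
           --availability-zone eu-central-1a --query Subnet.SubnetId --output text)
SUBNET_B=$(aws ec2 create-subnet --vpc-id "$VPC" --cidr-block 10.100.10.64/26 \
           --availability-zone eu-central-1b --query Subnet.SubnetId --output text)

# Internet Gateway, attached to the VPC.
IGW=$(aws ec2 create-internet-gateway \
      --query InternetGateway.InternetGatewayId --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW" --vpc-id "$VPC"

# Route table: default route to the Internet Gateway, associated with both subnets.
RTB=$(aws ec2 create-route-table --vpc-id "$VPC" \
      --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$RTB" \
    --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW"
aws ec2 associate-route-table --route-table-id "$RTB" --subnet-id "$SUBNET_A"
aws ec2 associate-route-table --route-table-id "$RTB" --subnet-id "$SUBNET_B"
```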
It is a good idea to create all security groups in advance. Security groups can also be created when databases and other resources are created, but then they end up with unhelpful names such as “rds-launch-wizard-2”.
On the AWS Management Console (web interface), navigate to the Security Group feature. The following table specifies which security groups to create. Each has a name and one or more inbound rules. For outbound rules, keep the default, which allows sending requests on any port to any destination.
For various tasks, it is useful to have a VM to connect to via SSH. This is "behind the firewall" and will allow you to access resources such as the database which are not public.
In the AWS Management Console (web interface), navigate to EC2 product.
If you do not already have an SSH public key then go to the left navigation under "Key Pairs" and create a new Key Pair. If you already have an SSH public key, e.g. created on your computer, you do not need to do this step, you can upload it later.
Navigate to the "Instances" section of the left-hand navigation.
Click “Launch Instance”
Select the latest Ubuntu image
Select a very small instance size (to save costs)
Select the VPC created earlier
Set “Auto-assign public IP” to Enable
Set the tag “Name” to e.g. “SSH from Internet”
Select the “SSH from Internet” Security Group created earlier
Select an SSH Key pair that you have access to so that you can log on to the server.
It can take quite a while, for example 5 minutes, before a user can log in to a newly created EC2 instance. You might see the error "Permission denied (publickey)" during this time.
To connect to it, use ssh with the username ubuntu together with the key you created or uploaded earlier, e.g. ssh -i <path to key> ubuntu@<public IP>.
Endpoints uses the database to store various things such as which Endpoints "applications" have been installed.
In the AWS Management Console (web interface), navigate to RDS product. This is the AWS managed database product.
Click “Create a new database”.
Name it something like “Endpoints”.
We currently support PostgreSQL 14 (although other versions will probably work).
Select a random master password.
Select “Create new Subnet Group”
Set “Public Access” to “No”, as this database should not be publicly visible on the internet.
Select the database Security Group that was created earlier.
Database schema creation is not necessary; the software does that on startup.
“Point-in-Time Recovery” is activated by default, so no action is required to enable that.
Make sure that “Auto Minor Version Upgrade” is enabled, so that you do not have to take care of manually upgrading the database between minor versions.
A Task Definition is a blueprint of how AWS will install and run the Endpoints software.
In the AWS Management Console (web interface), navigate to the ECS product. ECS is the service AWS offers to manage Docker installations.
Create a new Task Definition
Select type "Fargate". This means that AWS itself will automatically allocate compute resources, as opposed to having to do it manually.
Name the Task Definition with a name like "Endpoints"
Select the RAM and CPU. We recommend at least 500MB RAM.
Add one container within the Task Definition. Give it a name like "endpoints".
The URL to the public Docker image of Endpoints is public.ecr.aws/x1t6d0t7/endpoints-he
Set a hard memory limit e.g. 450MB RAM.
Add a single port mapping to the container. Container port is 8080.
Add the following environment variables to the container:
ENDPOINTS_JDBC_URL is like jdbc:postgresql://<host>/postgres?user=postgres&password=postgres, where <host> is the host value of the RDS database.
ENDPOINTS_BASE_URL is the URL of the service, e.g. https://endpoints.myservice.com/ with a trailing slash.
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME is, for example, “live environment”. This is just text which is displayed on the Service Portal login page. In case you have multiple Endpoints installations, it is convenient to differentiate them with text on the login page.
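The ENDPOINTS_JDBC_URL value can be assembled from the RDS connection details. A small sketch; the host, user and password below are placeholders to be replaced with the real values from the RDS console:

```shell
# Placeholders: take the real values from the RDS console.
DB_HOST="endpoints.xxxxx.eu-central-1.rds.amazonaws.com"
DB_USER="postgres"
DB_PASSWORD="postgres"

ENDPOINTS_JDBC_URL="jdbc:postgresql://${DB_HOST}/postgres?user=${DB_USER}&password=${DB_PASSWORD}"
ENDPOINTS_BASE_URL="https://endpoints.myservice.com/"   # must end with a slash

echo "$ENDPOINTS_JDBC_URL"
```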
A "Cluster" is a set of compute resources, managed by AWS, where Endpoints will run.
In the AWS Management Console (web interface), navigate to the ECS product
Create the ECS cluster (not Kubernetes Cluster)
Select “Networking only” type.
Don’t select creation of a new VPC
The Load Balancer is responsible for taking the user's HTTPS requests and forwarding them to the Endpoints software running on the managed ECS cluster created above.
In the AWS Management Console (web interface), navigate to the EC2 product (load balancers are managed within EC2).
Go to the “Load Balancers” section.
Click “Create Load Balancer”.
Select the default “Application Load Balancer” from the two options.
Change the listener to be HTTPS only.
Select the correct VPC, which was created above.
Select all subnets.
Select the HTTPS certificate that has been previously created.
Select the HTTPS security group previously created.
Go to the "Load balancer target group".
Create a new Target Group. Its settings are not important, as it will be deleted later: each time a Docker instance is registered with the load balancer, a new Target Group is created.
Do not register any targets to the newly created Target Group (as it will be deleted later).
This is necessary so that when someone navigates to your domain, their requests are sent to the AWS Load Balancer created above, and thus the request can be served by Endpoints.
In the AWS Management Console (web interface), go to the EC2 product.
In the Load Balancer section, click on the Load Balancer created above. You will see it has a DNS name (an A record) such as Endpoints-HTTPS-66055465.eu-central-1.elb.amazonaws.com.
In the tool where you administer the domain, create a CNAME DNS record from the domain or subdomain chosen for this installation to the DNS name you read off in the last step.
This step takes the Task Definition you have created earlier (which is a blueprint for running the software) and installs it on the Cluster created earlier (a set of compute resources).
In the AWS Management Console (web interface), go to the ECS product.
Navigate to ECS “Clusters” (not Kubernetes Clusters)
Select the newly created “Cluster”.
Create a service called “Endpoints”
Type is Fargate
select the Task Definition created above
Select the VPC created above
Add all subnets
Select the “webapp” security group, created above
In the Load Balancing section:
Select “Application Load Balancer”
Select the load balancer previously created in the drop-down
Click the “Add to load balancer” button
Select the target group name
Select the existing “production listener”
The URL is /*, i.e. slash followed by a star.
The health check path is /health-check.
Set the application as "sticky" in the load balancer. (This is required for the Service Portal in case more than one instance is running, as the "state" of the web application is stored in server memory.)
Navigate to the EC2 Product in the Management Interface.
Go to the Target Group section in the left navigation
Select the previously-created Target Group
Navigate to the "Attributes" tab
Click "Edit"
Enable the "Stickiness" checkbox
Select the "Load balancer generated cookie" option.
It is possible, but not necessary, to use CloudWatch to create monitoring alerts for the health of various components such as the database. To create an alarm:
In the AWS Management Console (web interface), go to the CloudWatch product.
Click “Create Alarm”.
Set up the alarm, as described below
On the last screen, select that the action should be to send an email to the appropriate person.
Perform the above steps for each of the following alarms:
CPU: ECS -> CPUUtilization, > 70 (percentage) for 1 period (= 5 minutes)
Memory: ECS -> MemoryUtilization > 70 (percentage) for 1 period (= 5 minutes)
Up: ApplicationELB > Per AppELB, per TG Metrics -> UnHealthyHostCount, > 0 for 1 period (= 5 minutes)
DB Disk: RDS -> Per-Database Metrics -> FreeStorageSpace for your database instance, < 5 (percentage) for 1 datapoint
The application is now available under your URL, e.g. https://endpoints.myservice.com/.
If the application is configured in multi-application mode, the Service Portal is available under https://endpoints.myservice.com/service-portal with the default username “admin” and password “admin”.
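To verify the deployment from the command line, you can request the /health-check path (the same path the load balancer probes). The base URL below is a placeholder for your own:

```shell
BASE_URL="https://endpoints.myservice.com/"   # your ENDPOINTS_BASE_URL, with trailing slash

health_url="${BASE_URL}health-check"
echo "Checking $health_url"
# Prints the HTTP status code; 200 means the instance is healthy.
curl -fsS --max-time 10 -o /dev/null -w '%{http_code}\n' "$health_url" \
    || echo "not reachable (yet)"
```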
Go to the ECS Cluster, go to the Service, and click on the “Events” tab to see what is going on, e.g. whether the health check is failing.
Go to CloudWatch, Log Groups; there you will see that a new log group has been created. Go into the log file and check whether there are any errors.
This is an example installation on DigitalOcean. There are certainly many other ways to install OpenEndpoints on DigitalOcean.
Sign up to DigitalOcean or be invited to an existing account
On the left hand navigation go to “API” and “Generate New Token” with a name such as your name. Record the token somewhere for later.
Create a Project
Top-left of the navigation all projects are listed
Click on “New Project”
Enter the name “Endpoints”
For “What’s it for” answer “Operational / Developer tooling”
For “Environment” answer “Production”
Upload the Endpoints software as a Docker image to DigitalOcean Docker Repository
Go to “Container Registry” on the left hand side navigation
Use a name which is unique to your company or product; this is globally unique across all of DigitalOcean. We use “endpoints” in this example.
On your local computer (e.g. Mac): Install "doctl", see https://docs.digitalocean.com/reference/doctl/how-to/install/
doctl auth init
doctl registry login
docker pull public.ecr.aws/x1t6d0t7/endpoints-he
docker tag public.ecr.aws/x1t6d0t7/endpoints-he registry.digitalocean.com/endpoints/endpoints
docker push registry.digitalocean.com/endpoints/endpoints
Create a database:
Click on “Databases” in left navigation
Click "create"
Choose “PostgreSQL”
Currently we recommend PostgreSQL 14 (although other versions will probably work).
We recommend starting with the cheapest version which is “Basic Node” with 1GB of RAM.
Choose your Data Center.
Under “Choose a unique database cluster name” choose something like “endpoints”.
Click the green “Create a Database Cluster” at the bottom of the screen to actually start the creation process. (The creation process takes a while, e.g. 5-10 minutes.)
After you start the creation process, the resulting screen displays information about the database, with a few tabs such as “Overview”, “Insights” etc.
Create the application:
On the left navigation click on “App Platform”
Create a new app
Select "source provider" of “DigitalOcean Container Registry”
Add the following environment variables:
JAVA_OPTIONS is -verbose:gc
ENDPOINTS_BASE_URL is whatever domain you want the service to be running under, e.g. https://endpoints.mycompany.com/
ENDPOINTS_JDBC_URL: go to the database settings screen, then to “Connection Details” at the bottom right of the screen, select the default “Public Network”, and use a format like jdbc:postgresql://<host>:<port>/<database>?user=<username>&password=<password>
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME is e.g. DigitalOcean
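DigitalOcean's “Connection Details” box shows a postgresql:// connection URI, whereas ENDPOINTS_JDBC_URL uses JDBC syntax. The conversion can be sketched with shell parameter expansion; the URI below is a made-up example, not real credentials:

```shell
# Made-up example URI in the format DigitalOcean displays.
DO_URI="postgresql://doadmin:secret@db-postgresql-fra1-12345.b.db.ondigitalocean.com:25060/defaultdb"

# Pull the components apart with shell parameter expansion.
rest="${DO_URI#postgresql://}"   # doadmin:secret@host:port/db
userpass="${rest%%@*}"           # doadmin:secret
hostportdb="${rest#*@}"          # host:port/db
user="${userpass%%:*}"
password="${userpass#*:}"
hostport="${hostportdb%%/*}"
database="${hostportdb#*/}"

ENDPOINTS_JDBC_URL="jdbc:postgresql://${hostport}/${database}?user=${user}&password=${password}"
echo "$ENDPOINTS_JDBC_URL"
```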
Choose an app name, this is used in the internal URL but otherwise doesn’t matter much
Wait for it to deploy
See the URL and check it works
Enter a CNAME in your DNS from the URL you want to the one that DigitalOcean has supplied
Go to Settings, in “App Settings” under “Domains” add the domain you want the service to run under, so that HTTPS certificates work.
Add environment variables. This is just the minimal set to start working; there are more options, see the reference below for more details.
ENDPOINTS_BASE_URL
With trailing slash, for example https://endpoints.offer-ready.com/. Mandatory in both single-application and multi-application mode.
ENDPOINTS_JDBC_URL
Points to a PostgreSQL database. For example jdbc:postgresql://localhost/endpoints?user=postgres&password=postgres. At this point in time PostgreSQL 10+ is supported. Mandatory in both single-application and multi-application mode.
ENDPOINTS_PUBLISHED_APPLICATION_DIRECTORY
Optional; there is usually no need to set this. A directory where published applications are stored. Only set this variable if OpenEndpoints is run outside Docker, where the checked-out applications might survive an application restart. Single-application mode: n/a. Multi-application mode: optional (do not use with Docker!).
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME
For example "Production Environment", for the login screen of the Service Portal. Single-application mode: n/a. Multi-application mode: mandatory.
ENDPOINTS_SINGLE_APPLICATION_MODE_TIMEZONE_ID
A string timezone ID such as Europe/Berlin. See the column “TZ database name” in the table at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones for a full list; the string “UTC” may also be used.
This is used by the “On-Demand Incrementing Number” feature, which allocates unique values within, for example, a month. Exactly at what point in time the month begins/ends is influenced by this timezone. The environment variable is mandatory if the “On-Demand Incrementing Number” feature is used within the application.
This variable is only needed in single-application mode. In multi-application mode this information is taken from the application_config database table, which stores which applications are available. Single-application mode: mandatory. Multi-application mode: n/a.
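The effect of the timezone on month boundaries can be seen with GNU date: one and the same instant belongs to different months depending on the configured zone, which is exactly why the incrementing-number feature needs this setting.

```shell
# One and the same instant: 2024-01-31 23:30 UTC.
ts=$(date -u -d '2024-01-31 23:30:00 UTC' +%s)

TZ=UTC           date -d "@$ts" '+%Y-%m'   # prints 2024-01
TZ=Europe/Berlin date -d "@$ts" '+%Y-%m'   # prints 2024-02 (already February in Berlin)
```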
ENDPOINTS_CHECK_HASH
Default is "true". If this is set to "false", no hash checking is done. This is useful when Endpoints is not world-visible, for example as part of a multi-image Kubernetes Hub. Optional in both single-application and multi-application mode.
ENDPOINTS_DISPLAY_EXPECTED_HASH
For debugging, set this to "true" in order to display the expected value of security hashes. Optional in both single-application and multi-application mode.
ENDPOINTS_XSLT_DEBUG_LOG
Default is "false". For debugging: displays the input and output of the parameter transformation XSLT in the logfile, and also bodies produced by XSLT which will be sent to HTTP servers. Optional in both single-application and multi-application mode.
ENDPOINTS_AWS_CLOUDWATCH_METRICS_INSTANCE
Optional. If set, CloudWatch metrics are sent, and the Instance dimension of those metrics takes this value. For example, if there are multiple Endpoints installations within the same AWS account, setting this environment variable to a different value for each of them allows the metrics to be differentiated. If this environment variable is not set, no CloudWatch metrics are sent, for example when the application is deployed somewhere other than AWS, where no CloudWatch is available. Optional in both single-application and multi-application mode.
JAVA_OPTIONS
Things such as Java heap size. By default, Java takes care of assigning the right amount of memory for the Docker container. Useful values (multiple values separated by a space):
-verbose:gc causes a log line to be printed each time a GC occurs, showing the amount of memory used and reclaimed. Useful for determining whether the instance needs to be given more memory, or can be given less memory in order to save money.
-Dwicket.configuration=development causes exceptions to be output to the browser, which can be useful for debugging in a situation where there is no access to the Docker logfile. This is not recommended for production deployment, as it can expose internal information which might be useful to attackers.
-XX:ActiveProcessorCount=2, see https://www.databasesandlife.com/java-docker-aws-ecs-multicore/
Optional in both single-application and multi-application mode.
Load Balancer HTTPS: inbound HTTPS, source Anywhere (IPv4)
WebApp: inbound All TCP, source “Load Balancer HTTPS”
SSH from Internet: inbound SSH, source Anywhere (IPv4)
DB: inbound PostgreSQL, source “WebApp”; inbound PostgreSQL, source “SSH from Internet”
The application can be deployed in two different ways.
In both scenarios:
Docker is used to deploy the application.
A PostgreSQL database is required. No schema is needed, the application creates that itself the first time it runs.
We recommend using a cloud with managed Docker services, e.g. ECS on AWS, or Kubernetes.
This is the default option. You deploy the standard Docker image; you do not need to build your own.
The Service Portal is part of the installation:
Applications are published from Git using the Service Portal.
Application files are not stored inside the Docker image.
This is a special option.
With this deployment mode the application directory is stored directly inside the Docker image. You build your own Docker image with your own application files being part of that image. Hence, only a single application will be available with this installation.
No facility for publishing!
Note that the Service Portal will not be available with this deployment mode. On every change of your configuration you will have to build a new Docker image!