OpenEndpoints is software written in Java. The code is open source and licensed under Apache License with Commons Clause - see https://openendpoints.io/license.
For deployment we recommend using Docker. A public Docker image is available at public.ecr.aws/x1t6d0t7/endpoints-he.
By default the public Docker image ships with the free XSLT processor Saxon-HE. This is sufficient for most purposes.
To deploy OpenEndpoints with the commercial Saxon-PE (Professional Edition) you must buy a Saxon-PE license and build a Docker image using that edition. The license isn't expensive and it's absolutely worth the money.
Here are the steps you need to take:
Purchase a Saxon-PE license at https://www.saxonica.com/shop/shop.html. You will get two files: the JAR file containing the Saxon-PE code, and also a Saxon-PE license file.
Install Java 21 and Maven on your computer if you have not already done so.
Check out the OpenEndpoints Git repo to your computer if you have not already done so.
Execute the following command to install the Saxon-PE JAR that you have purchased into your local Maven repository on your computer: mvn install:install-file -Dfile=<path to your Saxon-PE file> -DgroupId=net.sf.saxon -DartifactId=Saxon-PE -Dversion=9.7.0.18 -Dpackaging=jar — replacing the path to your downloaded file as appropriate. Keep the -Dversion value the same, no matter which version you have actually downloaded.
Copy the Saxon-PE license file into your Git checkout, placing it at the path saxon-pe/saxon-license.lic.
Execute the following command to build OpenEndpoints with Saxon-PE: mvn -DSaxon=PE clean package
Build the Docker image using a command like docker build -t endpoints-pe .
Push the Docker image to your Docker repository. Note that the terms of the Saxon-PE license do not allow you to make this Docker image public in any way.
The application can be deployed in two different ways. In both scenarios Docker is used to deploy the application.
A PostgreSQL database is required. No schema setup is needed; the application creates the schema itself the first time it runs.
We recommend using a cloud with managed Docker services, e.g. ECS on AWS, or Kubernetes.
This is the default option. You deploy the standard Docker image; you don't need to build your own.
The Service Portal is part of the installation:
Applications are published from Git using the Service Portal.
Application files are not stored inside the Docker image.
This is a special option.
In this deployment mode the application directory is stored directly inside the Docker image: you build your own Docker image with your own application files included. Hence only a single application is available with this installation.
No facility for publishing!
Note that the Service Portal is not available in this deployment mode. On every change to your configuration you must build a new Docker image!
A PostgreSQL database is required prior to installing OpenEndpoints.
Database schema creation is not necessary; the software does that on startup.
The following data is saved in it:
Request log: This saves metadata for each call, but not the data transmitted with the call itself. The exception is debug mode, in which the transferred data can also be saved.
Incremental IDs: If the feature is used to provide incremental IDs, the last ID used is saved in the database.
Forward-to-Endpoint-URL: OpenEndpoints can generate short URLs, which, when called, cause a request to a previously-specified endpoint including previously-saved request parameters.
Service Portal: Application Data and Logins.
An application comprises multiple endpoints. Each endpoint has a name, which is used in the URL. The endpoint to be requested is specified when the user calls the application.
Endpoint URL
https://{base-url}/{application}/{endpoint}
OpenEndpoints supports two types of data sources that can serve as input for the content transformation:
data that is transferred with the request, and
data that is loaded from other sources when the endpoint is executed in the background.
The following methods are available to transfer data with the request:
GET Request with parameters URL-encoded
POST Request containing parameters. This is the default when an HTML <form> is used. The default content type is application/x-www-form-urlencoded. Use multipart/form-data to add any number of file uploads.
POST Request with content type application/xml: In this case arbitrary XML is supplied in the request body, which is passed to the parameter-transformation-input structure inside the <input-from-request> element, instead of the normal <parameter> elements.
POST Request with content type application/json: In this case arbitrary JSON is supplied in the request body, which is converted to XML and passed to the parameter-transformation-input structure inside the <input-from-request> element, instead of the normal <parameter> elements. Note:
Any characters which would be illegal in XML (for example an element name starting with a digit) are replaced by _xxxx_ containing the hex Unicode character code.
Note that if a JSON object has a key _content, then a single XML element is created, with the value of that _content key as the text body, and the other keys of the JSON object becoming attributes on the resulting XML element.
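As an illustration of these rules (the element and key names here are invented, and the converter's exact output may differ in detail), a JSON body such as:

```json
{ "order": { "_content": "Thanks!", "id": "123" } }
```

would become an XML element along these lines:

```xml
<order id="123">Thanks!</order>
```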
There are the following special request parameters:
How to supply special parameters
Special parameters are supplied along with the normal parameters, except in the case of a POST request with application/xml or application/json, in which case these special parameters are passed as GET parameters.
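For example (the domain, application, endpoint and hash value here are all hypothetical), a JSON POST with the special parameters appended to the URL as GET parameters might look like:

```
POST https://endpoints.example.com/my-app/my-endpoint?hash=2cf24d...&environment=preview&debug=true
Content-Type: application/json

{ "firstname": "Maria" }
```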
Firstly you need to either create an AWS account, or you need to have access to an AWS account.
Decide on what internet domain the new Endpoints deployment should be reachable at, for example https://endpoints.myservice.com/.
This can either be a domain such as myservice.com or a subdomain such as endpoints.myservice.com. In either case, you must own the main domain you wish to host on (myservice.com in this example). Purchasing this domain is out of scope of this document.
AWS offers free HTTPS certificates for use with their own services.
In the AWS Management Console (web interface), navigate to the Certificate Manager product.
Click “Request a Certificate”
On the form that is displayed, select “Request a Public Certificate” as this will be a publicly visible service.
Type in the domain name of the certificate. This is the full domain name, which from the example above would be endpoints.myservice.com. There is no https:// part, trailing slash or dot.
Click on the “Review and Request” button, then the “Confirm and Request” button.
This will either send an email to the owner of the domain (containing a link), or it will request that a particular DNS entry is made in the DNS settings for that domain. This is to prove that you own the domain for which AWS will create the HTTPS certificate. Either click on the link in the email or set up the DNS entry. A few minutes after the DNS entry has been set up, the AWS Management Console will show that the certificate has been issued.
A VPC is a part of the network managed by AWS. If this AWS account will do nothing more than serve Endpoints, you can use the default VPC. If it is shared with other services, for security we recommend creating a separate VPC for the Endpoints installation. If you decide to use the default VPC, you can skip this section. If you decide to use a separate VPC, follow these steps to create it:
In the AWS Management Console (web interface), navigate to the VPC product.
Create a new VPC, with the appropriate IP address range, such as 10.100.0.0/16. (Ignore the options about IPv6.) You can use any internal IP address range you like, as long as it doesn't conflict with any other internal IP addresses that the VPC may need to have access to (e.g. if Endpoints needs to be configured to access any databases or services internal to your company). If in doubt, talk to the department which manages IP addresses in your company. If the VPC does not need to access any other internal services, there is no reason not to proceed with the example given here.
Create two Subnets, with IP address schemes like 10.100.10.0/26 and 10.100.10.64/26. Specify the Availability Zones explicitly, choosing a different zone for each Subnet; otherwise they all get created in the same Availability Zone. This allows services to be split across multiple Availability Zones, meaning that if one AWS data center encounters issues, the application will still be available.
Create an “Internet Gateway”. This will allow the VPC to send and receive requests from the internet.
After the "Internet Gateway" has been created, select it, use the "Actions" drop-down to select "Attach to VPC" and select the VPC you created earlier.
Create a “Route Table”, then after its creation:
Click on the “Routes” tab at the bottom half of the screen. Add a rule from Destination 0.0.0.0/0 to the new Internet Gateway created above.
Click on “Subnet Association” tab and associate all the subnets with the routing table.
It is a good idea to create all security groups in advance. Security groups can also be created when databases and other resources are created, but then they get unhelpful names such as “rds-launch-wizard-2”.
On the AWS Management Console (web interface), navigate to the Security Group feature. The following table specifies which security groups to create. Each has a name and one or more inbound rules. For outbound rules, allow the default, which is to be able to send requests on any port to any location.
For various tasks, it is useful to have a VM to connect to via SSH. This is "behind the firewall" and will allow you to access resources such as the database which are not public.
In the AWS Management Console (web interface), navigate to EC2 product.
If you do not already have an SSH public key then go to the left navigation under "Key Pairs" and create a new Key Pair. If you already have an SSH public key, e.g. created on your computer, you do not need to do this step, you can upload it later.
Navigate to the "Instances" section of the left-hand navigation.
Click “Create New VM”
Select the latest Ubuntu image
Select a very small instance size (to save costs)
Select the VPC created earlier
Set “Auto-assign public IP” to Enable
Set the tag “Name” to e.g. “SSH from Internet”
Select the “SSH from Internet” Security Group created earlier
Select an SSH Key pair that you have access to so that you can log on to the server.
It takes quite a while before a user can log in to a newly created EC2 instance, for example 5 minutes. You might see the error "Permission denied (publickey)" during this time.
To connect to it, use the ssh username ubuntu together with the key you created or uploaded earlier.
Endpoints uses the database to store various things such as which Endpoints "applications" have been installed.
In the AWS Management Console (web interface), navigate to RDS product. This is the AWS managed database product.
Click “Create a new database”.
Name it something like “Endpoints”.
We currently support PostgreSQL 14 (although other versions will probably work).
Select a random master password.
Select “Create new Subnet Group”
Set “Public Access” to “No”, as this database should not be publicly visible on the internet.
Select the database Security Group that was created earlier.
Database schema creation is not necessary; the software does that on startup.
“Point-in-Time Recovery” is activated by default, so no action is required to enable that.
Make sure that “Auto Minor Version Upgrade” is enabled, so that you do not have to take care of manually upgrading the database between minor versions.
A Task Definition is a blueprint of how AWS will install and run the Endpoints software.
In the AWS Management Console (web interface), navigate to the ECS product. ECS is the service AWS offers to manage Docker installations.
Create a new Task Definition
Select type "Fargate". This means that AWS itself will automatically allocate compute resources, as opposed to having to do it manually.
Name the Task Definition with a name like "Endpoints"
Select the RAM and CPU. We recommend at least 500MB RAM.
Add one container within the Task Definition. Give it a name like "endpoints".
The URL to the public Docker image of Endpoints is public.ecr.aws/x1t6d0t7/endpoints-he
Set a hard memory limit e.g. 450MB RAM.
Add a single port mapping to the container. Container port is 8080.
ENDPOINTS_JDBC_URL is like jdbc:postgresql://xxxxx/postgres?user=postgres&password=postgres, where xxxxx is the host value of the RDS database.
ENDPOINTS_BASE_URL is the URL of the service, e.g. https://endpoints.myservice.com/ with a trailing slash.
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME is, for example, “live environment”. This is just text displayed on the Service Portal login page. If you have multiple Endpoints installations, it is convenient to differentiate them with text on the login page.
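Putting these together, the container's environment variables might look like this (the host name, password and display name are placeholders to be replaced with your own values):

```
ENDPOINTS_JDBC_URL=jdbc:postgresql://endpoints.xxxxx.eu-central-1.rds.amazonaws.com/postgres?user=postgres&password=postgres
ENDPOINTS_BASE_URL=https://endpoints.myservice.com/
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME=live environment
```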
A "Cluster" is a set of compute resources, managed by AWS, where Endpoints will run.
In the AWS Management Console (web interface), navigate to the ECS product
Create the ECS cluster (not Kubernetes Cluster)
Select “Networking only” type.
Don’t select creation of a new VPC
The Load Balancer is responsible for taking the user's HTTPS requests, and forwarding them on to the Endpoints software running on a managed ECS cluster created above.
In the AWS Management Console (web interface), navigate to the ECS product
Go to the “Load Balancers” section.
Click “Create Load Balancer”.
Select the default “Application Load Balancer” from the two options.
Change the listener to be HTTPS only.
Select the correct VPC, which was created above.
Select all subnets.
Select the HTTPS certificate that has been previously created.
Select the HTTPS security group previously created.
Go to the "Load balancer target group".
Create a new Target Group. Its settings are not important; it will be deleted later, because each time a Docker instance is registered with the load balancer, a new Target Group is created.
Do not register any targets to the newly created Target Group (as it will be deleted later)
This is necessary so that when someone navigates to your domain, their requests are sent to the AWS Load Balancer created above, and thus the request can be served by Endpoints.
In the AWS Management Console (web interface), go to the EC2 product.
In the Load Balancer section, click on the Load Balancer created above. You will see it has a DNS name (A record) such as Endpoints-HTTPS-66055465.eu-central-1.elb.amazonaws.com.
In the tool where you administer the domain, create a CNAME DNS record from the domain or subdomain chosen for this installation to the domain name you read off in the previous step.
This step takes the Task Definition you have created earlier (which is a blueprint for running the software) and installs it on the Cluster created earlier (a set of compute resources).
In the AWS Management Console (web interface), go to the ECS product.
Navigate to ECS “Clusters” (not Kubernetes Clusters)
Select the newly created “Cluster”.
Create a service called “Endpoints”
Type is Fargate
Select the Task Definition created above
Select the VPC created above
Add all subnets
Select the “webapp” security group, created above
In the Load Balancing section:
Select “Application Load Balancer”
Select the load balancer previously created in the drop-down
Click the “Add to load balancer” button
Select the target group name
Select the existing “production listener”
The URL is /*, i.e. slash followed by a star
Health check is /health-check
Set the application as "sticky" in the load balancer. (This is required for the Service Portal in case more than one instance is running, as the "state" of the web application is stored in server memory.)
Navigate to the EC2 Product in the Management Interface.
Go to the Target Group section in the left navigation
Select the previously-created Target Group
Navigate to the "Attributes" tab
Click "Edit"
Enable the "Stickiness" checkbox
Select the "Load balancer generated cookie" option.
It is possible, but not necessary, to use CloudWatch to create monitoring alerts for the health of various components such as the database. To create an alarm:
In the AWS Management Console (web interface), go to the CloudWatch product.
Click “Create Alarm”.
Set up the alarm, as described below
On the last screen, select that the action should be to send an email to the appropriate person.
Perform the above steps for each of the following alarms:
CPU: ECS -> CPUUtilization, > 70 (percentage) for 1 period (= 5 minutes)
Memory: ECS -> MemoryUtilization > 70 (percentage) for 1 period (= 5 minutes)
Up: ApplicationELB > Per AppELB, per TG Metrics -> UnHealthyHostCount, > 0 for 1 period (= 5 minutes)
DB Disk: RDS -> Per-Database Metrics -> FreeStorageSpace for the database instance, < 5 (percentage) for 1 datapoint
The application is now available under your URL e.g. https://endpoints.myservice.com/
If the application is configured in multi-application mode then the Service Portal is available under https://endpoints.myservice.com/service-portal with username/password admin/admin.
Go to the ECS Cluster, go to the Service, click on the “Events” tab to see what’s going on, e.g. health check is failing
Go to CloudWatch, Log Groups, see that there is a new log group which has been created, go into the log file and see if there are any errors.
The term ”application” is used to describe a piece of configuration. OpenEndpoints can serve multiple applications simultaneously. You can create different projects with different application directories and run them in parallel.
An application is configured with a directory of files - mainly XML files. The structure of the directory is explained in detail here: .
OpenEndpoints pulls the application directory from a Git repository. When publishing, the latest version is pulled from Git and your configuration is updated.
Click Add New Application to create your first application.
This will open the screen to add new or edit existing applications:
Choose an application name, which will become part of the endpoints URL
After successfully adding a new application you will find your new application in the "Change Application" Screen:
OpenEndpoints installed in "Multi Application Mode" (see ) comes with a service portal for administration:
https://[your-base-url]/service-portal
Note that in "Single Application Mode" your application is already packed with the Docker container.
A default account for the admin is automatically created during installation:
Security Alert
Don't forget to set a new password as soon as possible!
The process to publish an application requires two steps:
Commit changes in your application directory to Git
Use the "Publisher" to pull the latest revision of your application into the software.
Open the application you want to publish:
This option will:
Load the latest available revision from your Git repository
Check the new configuration for consistency and errors. If an error is found, the new version will not overwrite any existing status. A detailed description of the error is displayed.
On success the new revision will replace the previous configuration.
Promote Preview to Live
Promote Preview to Live: This option will not pull a new version from Git, but simply copies the existing configuration from Preview to Live. No further consistency check is required, as this was already done when that version was published to Preview.
This software takes XML from multiple sources (e.g. URLs, files, or any Java class you write which can produce XML). You configure which sources you want, and the software combines the XML from those sources into one XML document in memory. The XML is then optionally transformed using XSLT. The resulting document is then optionally converted into PDF, XLS or JSON. The resulting file is then either downloaded to the browser, or sent as an email, or processed by a Java class you write which performs any action on the file, or any combination of the above.
Documentation for our software is available here: https://openendpoints.gitbook.io/
We would like to thank GITBOOK for providing us with their great product for free.
Further documentation is available in the LyX doc . Download LyX for free for Windows/Mac/Linux in order to read and contribute to this file.
This software was originally written by and commissioned by . At the time of writing, Offer-Ready IT-Services & Consulting GmbH still maintain the project.
Contributions are welcome. Please open an issue describing what you wish to achieve. We will be able to help you with advice, before you invest the time of development. When you've developed your patch, please submit a pull request using github.
This is an example installation on DigitalOcean. There are, of course, many other ways to install OpenEndpoints on DigitalOcean.
Sign up to DigitalOcean or be invited to an existing account
On the left hand navigation go to “API” and “Generate New Token” with a name such as your name. Record the token somewhere for later.
Create a Project
Top-left of the navigation all projects are listed
Click on “New Project”
Enter the name “Endpoints”
For “What’s it for” answer “Operational / Developer tooling”
For “Environment” answer “Production”
Upload the Endpoints software as a Docker image to DigitalOcean Docker Repository
Go to “Container Registry” on the left hand side navigation
Use a name which is unique to your company or product; this is globally unique across all of DigitalOcean. We use “endpoints” in this example.
On your local computer (e.g. Mac): Install "doctl", see https://docs.digitalocean.com/reference/doctl/how-to/install/
doctl auth init
doctl registry login
docker pull public.ecr.aws/x1t6d0t7/endpoints-he
docker tag public.ecr.aws/x1t6d0t7/endpoints-he registry.digitalocean.com/endpoints/endpoints
docker push registry.digitalocean.com/endpoints/endpoints
Create a database:
Click on “Databases” in left navigation
Click "create"
Choose “PostgreSQL”
Currently we recommend PostgreSQL 14 (although other versions will probably work).
We recommend starting with the cheapest version which is “Basic Node” with 1GB of RAM.
Choose your Data Center.
Under “Choose a unique database cluster name” choose something like “endpoints”.
Click the green “Create a Database Cluster” at the bottom of the screen to actually start the creation process. (The creation process takes a while, e.g. 5-10 minutes.)
After you start the creation process, the resulting screen displays information about the database, with a few tabs such as “Overview”, “Insights” etc.
Create the application:
On the left navigation click on “App Platform”
Create a new app
Select "source provider" of “DigitalOcean Container Registry”
Add the following environment variables:
JAVA_OPTIONS is -verbose:gc
ENDPOINTS_BASE_URL - use whatever domain you want the service to be running under, e.g. https://endpoints.mycompany.com/
ENDPOINTS_JDBC_URL
Go to the database settings screen
Go to the bottom right of the screen “Connection Details”
Select the default “Public Network”
Use a format like jdbc:postgresql://<host>:<port>/<database>?user=<username>&password=<password>
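The Connection Details panel shows host, port, database, user and password as separate values; as a small sketch (all values here are placeholders), they can be assembled into the JDBC URL on the command line:

```shell
# Placeholder values - copy the real ones from the "Connection Details" panel.
DB_HOST="db-postgresql-fra1-12345.b.db.ondigitalocean.com"
DB_PORT="25060"
DB_NAME="defaultdb"
DB_USER="doadmin"
DB_PASSWORD="secret"

# Assemble the value for the ENDPOINTS_JDBC_URL environment variable.
echo "jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}?user=${DB_USER}&password=${DB_PASSWORD}"
```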
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME is e.g. DigitalOcean
Choose an app name; this is used in the internal URL but otherwise doesn't matter much
Wait for it to deploy
See the URL and check it works
Enter a CNAME in your DNS from the URL you want to the one that DigitalOcean has supplied
Go to Settings, in “App Settings” under “Domains” add the domain you want the service to run under, so that HTTPS certificates work.
A simple web form to send data to OpenEndpoints could look like this:
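A minimal sketch (the base URL, application name my-app and endpoint name contact-form are hypothetical; the hash value must be calculated as described under Authentication):

```html
<!-- Field names must match the endpoint parameter names (case sensitive).
     The special "hash" parameter is sent along with the normal parameters. -->
<form method="POST" action="https://endpoints.example.com/my-app/contact-form">
  <input type="hidden" name="hash" value="[the-calculated-hash]">
  <input type="text" name="firstname">
  <input type="email" name="email">
  <button type="submit">Send</button>
</form>
```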
The form calls your endpoint, which might for example forward the data by email or pass it on to another system.
The behaviour of your web form after pressing the submit button depends on both the actions defined in your endpoint (success and error actions), and the html code in your web form.
On success (but not on error) you may use parameter placeholders. For example, you could add tracking information that is executed by your success page.
Omitting a particular action on success or failure will simply return a status code of 200 for success or 400 for any type of failure.
To output JSON, the XSLT should use the output method "text":
The transformer should explicitly define the content type "application/json":
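A sketch of such a stylesheet (the input element names here are invented for illustration):

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- "text" output method: the stylesheet emits raw JSON, not XML -->
  <xsl:output method="text"/>
  <xsl:template match="/">{ "id": "<xsl:value-of select="/transformation-input/id"/>" }</xsl:template>
</xsl:stylesheet>
```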
The name attributes in the web form can be completely different from the names of the endpoint parameters.
The parameter-transformation-xslt may only output values for parameters that are declared in the endpoints.xml file. There are no restrictions on how inputs from the web form are mapped to this output.
The output generated by your XSLT must supply values for any parameter that does not have a default value. Not doing so will raise an error.
Input from the web form is passed to endpoint parameters only if the name attribute in the web form matches the name attribute of the endpoint parameter. Note that names are case sensitive.
Parameters without a default value must have an equivalent in the web form. Only parameters for which there is a default value are optional.
Inputs from web forms for which there is no corresponding parameter are silently ignored.
Potential Source of Error
OpenEndpoints parameter names are case sensitive. For example, if the name of the parameter is "firstname", then it will not match with "Firstname".
After you've published your application, you probably want to test your endpoint :-)
Navigate to Calculate Hash in the main navigation.
Select your endpoint and the environment.
In case your endpoint has "include-in-hash" parameters, you will be prompted to enter values for those parameters. See: .
Press button Calculate Hash and copy the calculated hash value.
Navigate to Home in the main navigation.
Copy Live URL or Preview URL, depending on which environment you would like to use. The URL should look like this: https://endpoints.openendpoints.com/foo/{endpoint}?{parameter}
Replace {endpoint} with the name of your endpoint.
Replace {parameter} with: hash=[the-calculated-hash]
If your endpoint has "include-in-hash" parameters, those parameters (with the values used for calculating the hash) must be added as additional GET parameters.
Copy the link into your browser.
Navigate to Request-Log in the main navigation. You will find your request, including additional information in case an error has occurred.
Add environment variables. This is just the minimal set to start working; there are more options, see for more details.
The Display Name is used in the user interface and will also be available as an input-from-application in .
The Git URL shall point to the root folder of your .
Navigate to "Publish" in the main navigation. The Publisher lets you publish the latest version of your Git to Preview or Live Environment - see: .
use an action to forward an email to your back office, and also send a custom message to the email address submitted with the form.
use an action to send data to your CRM.
Configure your endpoint to redirect the request to a success or error page - see . Note that the absolute path to your page must be used, not a relative path.
If you want to return a user-defined JSON, then use a data source transformation to create that JSON using XSLT. For example, the JSON could contain some ID returned from your CRM (see: ):
Use an action:
In case you use an action having a download-filename attribute, the user will stay on the form while the document is downloaded:
The form entries are passed to the endpoint parameters. Depending on whether these parameters have a default value and whether a is used, different things must be observed:
hash (mandatory): calculated SHA-256 hash - see Authentication
environment (optional): preview, or live (default) - see Environments
debug (optional): true, or false (default) - see Debug Mode
Load Balancer HTTPS: HTTPS / Source Anywhere IPv4
WebApp: All TCP / Source “Load Balancer HTTPS”
SSH from Internet: SSH / Source Anywhere IPv4
DB: PostgreSQL / Source “WebApp”; PostgreSQL / Source “SSH from Internet”
ENDPOINTS_BASE_URL
With trailing slash, for example https://endpoints.offer-ready.com/
Single-application mode: mandatory. Multi-application mode: mandatory.
ENDPOINTS_JDBC_URL
Points to a PostgreSQL database. For example jdbc:postgresql://localhost/endpoints?user=postgres&password=postgres. At this point in time PostgreSQL 10+ is supported.
Single-application mode: mandatory. Multi-application mode: mandatory.
ENDPOINTS_PUBLISHED_APPLICATION_DIRECTORY
Optional. There is usually no need to set this. A directory where published applications are stored. Only set this variable if OpenEndpoints is run outside Docker, where the checked-out applications might survive an application restart.
Single-application mode: n/a. Multi-application mode: optional - do not use with Docker!
ENDPOINTS_SERVICE_PORTAL_ENVIRONMENT_DISPLAY_NAME
For example "Production Environment", for the login screen of the Service Portal.
Single-application mode: n/a. Multi-application mode: mandatory.
ENDPOINTS_SINGLE_APPLICATION_MODE_TIMEZONE_ID
A string timezone ID such as Europe/Berlin. See the column “TZ database name” in the table at https://en.wikipedia.org/wiki/List_of_tz_database_time_zones for a full list. The string “UTC” may also be used.
This is used by the “On-Demand Incrementing Number” feature, which allocates unique values within, for example, a month. Exactly at what point in time the month begins and ends is influenced by this timezone. The environment variable is mandatory if the “On-Demand Incrementing Number” feature is used within the application.
This variable is only needed if the application is in single-application mode. If the application is in multi-application mode then this information is taken from the application_config database table, which stores which applications are available.
Single-application mode: mandatory. Multi-application mode: n/a.
ENDPOINTS_CHECK_HASH
Default is "true". If this is set to "false", no hash checking is done. This is useful when Endpoints is not world-visible, for example as part of a multi-image Kubernetes Hub.
Single-application mode: optional. Multi-application mode: optional.
ENDPOINTS_DISPLAY_EXPECTED_HASH
For debugging, set this to "true" in order to display the expected value of security hashes.
Single-application mode: optional. Multi-application mode: optional.
ENDPOINTS_XSLT_DEBUG_LOG
Default is "false". For debugging: displays the input and output of the parameter transformation XSLT in the logfile, and also bodies produced by XSLT which will be sent to HTTP servers.
Single-application mode: optional. Multi-application mode: optional.
ENDPOINTS_AWS_CLOUDWATCH_METRICS_INSTANCE
Optional. If set, CloudWatch metrics are sent, and the Instance dimension of those metrics is this value. For example, if there are multiple Endpoints installations within the same AWS account, setting this environment variable to different values for each of them allows the metrics to be differentiated. If this environment variable is not set, no CloudWatch metrics are sent (for example when the application is deployed somewhere other than AWS, where CloudWatch is not available).
Single-application mode: optional. Multi-application mode: optional.
JAVA_OPTIONS
Things such as Java heap size. By default, Java takes care of assigning the right amount of memory for the Docker container. Useful values (multiple values separated by a space):
-verbose:gc causes logs to be printed each time a GC occurs, showing the amount of memory used and reclaimed. Useful for determining if the instance needs to be given more memory, or can be given less memory in order to save money.
-Dwicket.configuration=development causes exceptions to be output to the browser, which can be useful for debugging in a situation where there is no access to the Docker logfile. This is not recommended for production deployment, as it can expose internal information which might be useful to attackers.
-XX:ActiveProcessorCount=2, see here: https://www.databasesandlife.com/java-docker-aws-ecs-multicore/
Single-application mode: optional. Multi-application mode: optional.
Even on a static website there is always content that changes regularly and can be loaded from a database or a web service, for example. OpenEndpoints is perfect for converting external content into HTML, which can be seamlessly integrated into your own website.
Build an Endpoint to Return XSLT Transformation. Do not use the download-filename
attribute in this case.
The generated HTML is inserted directly into the <div>. Therefore, the HTML (generated with XSLT) should not contain a <head> or <body> tag, but actually only the content that is to be inserted!
Set content-type "text/html" in the transformer:
JavaScript can be used to load the HTML returned from the endpoint into a <div>.
In this example the script was taken from https://www.w3schools.com/lib/w3.js.
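As a sketch, the same effect can also be achieved without w3.js using the browser's built-in fetch API (the endpoint URL, hash value and element id here are hypothetical):

```html
<div id="endpoint-content"></div>
<script>
  // Load the HTML fragment returned by the endpoint into the <div>
  fetch("/my-application/html-fragment-endpoint?hash=...")
    .then(response => response.text())
    .then(html => { document.getElementById("endpoint-content").innerHTML = html; });
</script>
```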
Each request to OpenEndpoints requires a mandatory hash parameter:
The hash is a SHA-256 value, hex-encoded, calculated from several input values. On each request the server calculates the expected value of the user-supplied hash parameter. If the supplied value does not match the expected value, the request is denied.
The hash may be supplied as uppercase or lowercase in the request.
In the security.xml file under the application directory you have to create one or more secret keys.
Any secret key in security.xml will be a valid input to determine the expected calculated hash. The advantage of having more than one secret key is that you can implement a rotation of secret keys without interruption of service:
Add a new secret key
Adopt new hash values in your applications or web forms (using the new secret key)
Delete the old secret key
The input of the SHA-256 function is the concatenated string of the following values:
Name of the endpoint
The values of all the parameters listed in the <include-in-hash>
block of the endpoint, in the order in which they are listed there. (If there is no <include-in-hash>
section, or there is but it's empty, then no parameters are added to the hash's source string for this step.)
Environment name (either “live” or “preview”)
Any secret key from the security.xml file
Potential Source of Error
Note that if you use parameter transformation, the parameter values used for the <include-in-hash> calculation are the values after transformation, not the originally submitted values.
Assume an endpoint named "helloworld", the <include-in-hash> parameter values "abc" and "def" (in that order), and the secret key "openendpoints".
Expected Value LIVE environment
Input String = "helloworld" & "abc" & "def" & "live" & "openendpoints"
SHA256("helloworldabcdefliveopenendpoints")
= 82bb6e7f675a8d872688cb593a64f615b37f88478d7fed8705496d3e7a1c2699
Expected Value PREVIEW environment
SHA256("helloworldabcdefpreviewopenendpoints")
= 4afcbe21891e5be6762f495958659a25950a83e7c52f13594cbebe43cfdd9bf4
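The calculation can be reproduced in a few lines of Java; the helper class below is our own sketch, not part of OpenEndpoints:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

public class EndpointHash {
    /** Concatenates the endpoint name, the include-in-hash parameter values (in order),
     *  the environment ("live" or "preview") and a secret key, then returns the
     *  lowercase hex SHA-256 of that string. */
    public static String calculate(String endpoint, List<String> parameterValues,
                                   String environment, String secretKey) throws Exception {
        StringBuilder input = new StringBuilder(endpoint);
        for (String value : parameterValues) input.append(value);
        input.append(environment).append(secretKey);
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(input.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b & 0xff));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // SHA256("helloworld" + "abc" + "def" + "live" + "openendpoints")
        System.out.println(calculate("helloworld", List.of("abc", "def"), "live", "openendpoints"));
    }
}
```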
The possibility to include parameter values in the hash calculation can be used to implement use cases like:
Send a link to a web form with a distinct mandatory input that must not be changed. Changing the value of that parameter would result in a different expected hash value. Hence, the form only works with the (unchanged) value.
If the same endpoint is used by different web forms or applications, a hidden parameter can indicate the originating source. The expected hash is different for each originator, making it impossible to modify that value.
Generate the hash in combination with a timestamp. You can invalidate the request after a certain time - and the timestamp cannot be modified, because this would change the expected hash value.
HTML web forms allow sending files along with the form data:
OpenEndpoints automatically understands the correct enctype
to separate uploaded content from form <input>
data:
Web form with no file upload
application/x-www-form-urlencoded
Web form with one or many file uploads
multipart/form-data
Once the file(s) have been uploaded by the user, the following options exist to do things with these files:
The Email Task allows adding all uploaded files as attachments.
Note that it is not possible to select which file should be attached: all uploaded files are always attached.
When sending an HTTP request from OpenEndpoints using the HttpRequest Task, and when specifying that the HTTP request should contain an XML body, and when that XML is specified inline, the optional attribute upload-files="true"
may be set.
For example:
This syntax indicates that the element <foo>
will be filled with the contents of the uploaded file with the filename "xyz". The encoding is always base64, no other encodings are supported.
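A sketch of such an inline XML body (the element and field names here are hypothetical):

```xml
<xml-body upload-files="true">
    <!-- filled with the base64-encoded contents of the uploaded file "xyz" -->
    <foo upload-field-name="xyz" encoding="base64"/>
</xml-body>
```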
A distinction must be made between two technically different topics:
The web form containing multiple controls having the same name
attribute.
The web form containing a single control, but which allows multiple values to be selected.
According to the W3C standard, HTML forms can have multiple controls sharing the same name attribute. See: https://www.w3.org/TR/html52/sec-forms.html
For example:
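For illustration, a form with two checkboxes sharing the name "topping" (all names and the URL are hypothetical):

```html
<form method="POST" action="https://example.com/my-application/my-endpoint">
  <input type="checkbox" name="topping" value="cheese"> Cheese
  <input type="checkbox" name="topping" value="mushroom"> Mushroom
  <input type="submit" value="Submit">
</form>
```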
OpenEndpoints will combine all values submitted into a single input-from-request parameter, separated by a multiple-value-separator. The default separator is a double pipe: ||.
The above example would result in a parameter "topping" whose value is all submitted values joined with the separator, for example cheese||mushroom.
It is possible to define any other multiple-value-separator. See: Multiple parameters supplied with the same name.
Some HTML form controls allow submitting multiple values for a single control, for example a multi-select list (<select multiple>):
On selecting multiple values, this would be submitted by the user agent in the same way as having multiple controls with the same name attribute. The result will be the same as above in OpenEndpoints:
It is possible to upload several files at the same time:
The optional xml-from-application
directory contains custom xml files that may be used as a data-source. There are no restrictions on how these files should be structured.
The idea behind locally stored content is to avoid unnecessary data load if content doesn't change often.
The optional static
directory can be used to store any non-XML local content, for example images and prefabricated PDFs. You may use subdirectories.
When inserting images into a PDF, the static
directory serves as the local root path for the XSLT processor. So you can either use an absolute path (URL) or a relative (local) path that refers to this directory.
If the email body has a content type like text/html; charset=utf-8
then it may include tags such as <img src="cid:foo/bar.jpg">
. The tag is most commonly an <img>, but can be any tag. The system then searches the static directory for a file with that path. The file is included as a “related” multi-part part, meaning it is available to the HTML document as an "inline image" when it is rendered in the email client. Don't forget the cid:
text in the src
attribute!
Instead of returning content from XSLT transformation, you may also return “prefabricated” content from this directory. The syntax <response-from-static filename="path-to-file"/>
returns a file from this directory.
In general <error>
supports the same tags as <success>
. But there are limitations.
Placing <forward-to-endpoint>
inside <error>
is not supported, as this would require “auto increment” numbers to be assigned even in the case of <error>
(as any arbitrary endpoint might be called, which might require them). It is an explicit design goal to only support auto-increment numbers (e.g. for invoice numbers) in the case of <success>
.
On error the same tags can be used as for <success>
, but placeholders for client generated parameter values are not supported. An error might have happened during parameter transformation, and therefore client generated parameters are not necessarily available.
Limited Support of Variables
This limitation not only applies to the use of placeholders in the endpoint-definition, but also for any data-source or XSLT used within a transformation.
System generated parameters may be used.
In addition, for the processing of the <error> tag the following additional parameters are available:
${internal-error-text}
- This contains an internal error message. It is important this is not exposed to any end customer, as it might contain security-sensitive information such as “cannot connect to database at IP address 1.2.3.4” etc.
${parameter-transformation-error-text}
- Set in case the request failed because the parameter transformation failed and a message was set in the <error>
tag of its output.
By default, OpenEndpoints does not save any data that was transferred with a request.
Sometimes, however, it can make sense to have data-transfer tracking available, for example for error analysis. This is what the debug and verbose modes are for.
With the parameter transformation, OpenEndpoints generates a parameter-transformation-input.xml and - after successful transformation - a parameter-transformation-output.xml. These are only stored in memory and not persisted to disk, by default.
When Debug Mode is activated, these two files are saved in the request log and can be downloaded from the Service Portal. (If the transformation fails, the output.xml is not generated).
Only works with Parameter Transformation
The debug parameter will be silently ignored in case the endpoint does not use Parameter Transformation.
To use the debug mode, one of two alternatives must apply. Either both of these conditions are met at the same time:
The debug mode must be allowed for this application in the service portal.
An additional parameter is added to the request: debug=true
or
The verbose=true parameter must be sent
The request has an error (4xx or 5xx)
The intention of verbose=true is to capture errors; therefore verbose does not require the application to have "debug enabled". In contrast, sending debug=true captures all requests and thus does require the application to have "debug enabled".
Click "Clear debug log" to delete all files created with debug/verbose mode.
The current configuration of an application is loaded from a Git repository into OpenEndpoints (with the "Publisher" in the Service Portal).
To enable a simple form of "staging", 2 different versions of the Git repository can be activated at the same time. There are 2 different "environments" for this: PREVIEW and LIVE.
The "Publisher" in the service portal lets you
Publish the latest version of the GIT repository to the PREVIEW environment.
Promote the current version from PREVIEW to LIVE.
By default, all requests are always sent to the LIVE environment. To send a request to the PREVIEW environment, an additional parameter must be added:
Parameter to send a request to the PREVIEW environment
environment=preview
It is not required to add a parameter for the LIVE environment.
Hash calculation
Note that the hash calculation takes the environment parameter into account. There are therefore 2 different hash values for the two environments - see: Authentication.
The optional file email-sending-configuration.xml
under the application directory is required if emails are to be sent from OpenEndpoints.
OpenEndpoints supports two options for sending emails:
Sending via SMTP
Sending via MX address
The optional directory data-source-post-processing
contains custom XSLT files.
The root tag of the input xml (which is automatically provided by the software) is <data-source-post-processing-input>
. Within that root tag OpenEndpoints will insert the data converted from the data-source-definition or - if applicable - the output from a previous data-source-post-processing step.
The root tag of the output (=the file that is generated by your XSLT) must be <data-source-post-processing-output>
.
The optional file service-portal-endpoint-menu-items.xml
enables the creation of additional, user-defined pages in the service portal.
Pages of type form
and content
can be optionally organized under a menu-folder. The maximum supported depth of menu items is one.
The form
consists of two pages: the "form" to enter data, and the "result" of the form submit. Both pages are defined by a transformer returning an HTML page with mime-type application/html
.
The "form" (=the page returned by calling the form-endpoint) may contain form elements, but without a <form>
tag and without a submit button. The form data are submitted to the result-endpoint
.
The content
consists of a single page defined by a transformer returning an HTML page with mime-type application/html
.
The entire configuration of an application that is built with OpenEndpoints consists of files in a directory - we refer to this as the Application Directory.
An example directory structure is available on Github (public repository):
Directory Structure With XSD + Example Files
For each configuration file or directory, additional explanations are available in the subsections of this page.
Note that this is not a working example application. The aim of the example directory is to present the expected syntax and directory structure.
The application directory resides in your own Git repository. There are 2 modes:
Multi Application Mode (=default): The Service Portal provides a simple user interface to "publish" the latest version from Git into the software. An integrity check with meaningful error messages is carried out automatically. Most configuration errors are detected in this way.
Single Application Mode: You create a new Docker image derived from the standard Docker image, which includes the application directory. You need to build a new Docker image each time your configuration has changed.
The multi-application mode also offers the possibility of simple staging - see the section on the PREVIEW and LIVE environments.
The software is a little less strict than what is written in the XSD: Don’t be surprised if you find working examples where the order of elements is slightly different from what is described in the XSD.
Any additional files which are present in the application directory, but not required by the software, are silently ignored. For example, if you create an additional directory "project files" that will be ignored by the software without raising an error.
Many configuration files are only required if the corresponding features are used.
In any case, these few files must be present for a working configuration:
endpoints.xml having at least 1 endpoint
security.xml having at least 1 secret key
at least one transformer, which in turn also requires a data-source
The mandatory file security.xml
under the application directory contains one or more secret keys, which are used to calculate hash values. See: Authentication.
The mandatory file endpoints.xml
under the application directory contains one or more endpoint folders, each of which contains:
one or more endpoint definitions
zero or more nested endpoint folders
The root tag of the file is <endpoint-folder>
. A simple endpoints.xml
might have a single <endpoint-folder>
only (in this case it is the root tag itself). But it is also possible to create a hierarchy of endpoint-folders:
Parameters may be defined at any level. If they are defined in an Endpoint they only apply to that Endpoint. If they are defined in a folder they apply to all Endpoints under that folder.
Settings in the child folder will override the parent settings.
Parameters must have a name
attribute, which is unique within the endpoint-folder. The default value is optional, but using one will change the behaviour of the application. See Endpoint Parameter.
Note that a GET or POST request allows multiple parameters with the same name. For example, a web form might have 2 or more form fields with the same name attribute. In contrast, the name attributes of endpoint parameters must be unique. As a consequence - if a parameter name is used multiple times in a GET or POST request - several values are transferred for the same endpoint parameter. The solution is to join the multiple values with a separator. The default separator is a double pipe: ||. The attribute multiple-value-separator
allows defining an alternative separator.
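The folder hierarchy and inheritance described above can be sketched as follows (all names are hypothetical):

```xml
<endpoint-folder>
    <!-- available to all endpoints in the file -->
    <parameter name="language" default-value="en"/>
    <endpoint-folder multiple-value-separator=";">
        <!-- inherits "language", overrides the separator for its children -->
        <endpoint name="my-endpoint">
            <parameter name="topping"/>
            <success/>
        </endpoint>
    </endpoint-folder>
</endpoint-folder>
```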
The basic syntax of any endpoint is:
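A hedged reconstruction using only elements described elsewhere in this documentation (the endpoint name and filename are hypothetical, and the exact child syntax of <include-in-hash> is an assumption):

```xml
<endpoint name="my-endpoint">
    <!-- parameters listed here form part of the hash calculation -->
    <include-in-hash>
        <parameter name="foo"/>
    </include-in-hash>
    <success>
        <response-from-static filename="thank-you.html"/>
    </success>
    <!-- an empty <error> returns an empty 400 response on failure -->
    <error/>
</endpoint>
```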
Details for each property can be found in the related sections of this documentation:
Parameter Transformation allows you to optionally transform submitted parameter values, for example to validate, normalize or correct submitted values, or to create new parameters by calculation or by loading additional data from any data source.
Include-In-Hash Use Cases allows for advanced authentication requirements based on parameter values.
Section Types of Endpoints describes all action types for <success> and <error>.
Tasks include actions such as sending an email, sending a request to any kind of REST-API or logging data into a database.
The optional directory data-source-xslt
contains custom XSLT files.
Each transformer references a data-source, which can be transformed into any output using a custom data-source-xslt file.
The data-source will generate a file with a root tag <transformation-input>
comprising all content generated by the data-sources.
The mandatory directory data-sources
contains data-source definitions. A data-source definition contains references to one or many data-sources from which you would like to load data and convert it to output using a transformer.
OpenEndpoints can interact with third party REST or SOAP APIs, either using such API as a data-source, or to execute a TASK.
The request body of such request can be either inline content (=directly written into endpoints.xml or into the data-source definition), or the body is generated by XSLT transformation.
This optional directory contains custom xslt to generate such request bodies.
The transformation input (=the XML generated by the software, which the XSLT is applied to) does not support arbitrary data-sources, but is limited to parameter values:
The values of the parameters inside that generated xml are different depending on the context:
The input xml contains the parameters from the original request. Note that the original request could have different parameters than those declared in the endpoints.xml - which is possible when parameter transformation is used.
For HTTP requests that are part of a regular data-source definition or a task, the input XML for generating a request body contains the parameters as declared in endpoints.xml; after a parameter transformation their values can differ from those in the original request.
You can embed custom fonts into your generated PDF.
In order for custom fonts to be embedded in a PDF, 2 steps are required:
Upload the TTF files into the fonts
directory. Only TTF fonts are supported.
Declare those fonts in the file apache-fop-config.xml.
Every combination of style and weight of a font must be declared as a font-triplet in the file apache-fop-config.xml:
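A sketch of the relevant part of such a configuration, following the standard Apache FOP format (the font file name and family name are examples):

```xml
<fop version="1.0">
  <renderers>
    <renderer mime="application/pdf">
      <fonts>
        <!-- one <font> per TTF file, one <font-triplet> per style/weight combination -->
        <font embed-url="fonts/MyFont-Bold.ttf">
          <font-triplet name="MyFont" style="normal" weight="bold"/>
        </font>
      </fonts>
    </renderer>
  </renderers>
</fop>
```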
Note that the file-name of the TTF font may be different from the name as used in the font-triplet attribute. The font-name as referred to in the XSLT must be the same as in the font-triplet name attribute (case sensitive).
The mandatory transformers
directory contains one or many transformers, each defining a transformation.
The only mandatory attribute is the data-source
. Omitting all other tags will output the original data-source (xml) without any transformation.
This optional directory contains custom XSLT for .
If an endpoint uses the parameter transformation, OpenEndpoints automatically generates the temporary XML parameter-transformation-input.xml
, which is the input for the transformation. This is (unless debug is enabled) only stored in memory, and never written to disk. Depending on how data are submitted to the endpoint, the input XML might look slightly different.
In case input was submitted as GET request or POST request containing parameters, the generated parameter-transformation-input.xml
will include submitted parameters.
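As a sketch, for a plain GET/POST request the generated input might look like this (the root tag and input-from-request tag are documented here; the inner parameter syntax is an assumption):

```xml
<parameter-transformation-input>
    <input-from-request>
        <parameter name="foo" value="bar"/>
    </input-from-request>
</parameter-transformation-input>
```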
In case data are submitted as a POST Request with application type application/xml
, the input-from-request
tag contains the XML request body instead of parameters.
In case data are submitted as a POST Request with application type application/json
, the input-from-request
tag contains the json request body converted to xml:
Parameter Transformation Output
The expected output (generated by your custom XSLT) must contain values for every parameter that does not have a default value; not doing so will raise an error. In addition, the output might have an error
tag. If present, an error is raised and the text within the error tag is available via the system parameter ${parameter-transformation-error-text}
.
A request contains any number of GET or POST parameters.
The values supplied by the user for these parameters are used in various places. For example, if you are fetching XML from a URL, parts or the whole of the URL may be the value of a parameter. The syntax for this is ${param}
. The braces are, in contrast to many other systems, mandatory.
Note that for security or technical reasons replacement is not always possible. See the specific sections for the various configuration elements where parameter replacement occurs.
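For example, a parameter value may be interpolated into a data-source URL (the URL here is hypothetical):

```xml
<xml-from-url>
    <url>https://example.com/api/customers/${customer-id}</url>
</xml-from-url>
```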
Case Sensitivity
Parameter names are case-sensitive. That means "firstname" is not the same as "Firstname". Case sensitivity is the default behaviour in XML.
Which parameters the user may supply must be defined in the endpoints.xml file; any parameter sent to the server which is not defined there is an error.
The simplest form of a parameter definition in endpoints.xml is <parameter name="foo">
. This means the user must supply a value for that parameter as a GET or POST parameter, like ?foo=value
in the request; not doing so is an error. (A request parameter transformation allows submitting parameters not defined in the request; for details please refer to the section on Parameter Transformation.)
Adding a default value makes the parameter optional. If the user doesn't supply anything, the default value will be used.
Parameters are defined, in the simplest case, directly under the root <endpoint-folder>
node of endpoints.xml. Parameters under the root <endpoint-folder>
node are available for all endpoints defined in the file.
An endpoint-folder can contain zero or many other endpoint-folders
It is possible to use "cascading" endpoint-folders. In this case, endpoint-folders and endpoints are arranged in a hierarchy so that certain aspects may be defined once and inherited by all children. The structure is thus a mandatory root <endpoint-folder> which may contain any number of child <endpoint-folder> and <endpoint> elements.
The HTTP standard allows multiple GET parameters to be supplied with the same name, like ?param=foo&param=bar
In this case, all those values are concatenated into the same parameter. The default separator is two pipe characters: ||.
In the example above, the parameter value would be foo||bar.
The default separator can be overridden with the multiple-value-separator attribute:
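A sketch, assuming the attribute is set on an endpoint-folder (from which it is inherited downwards):

```xml
<endpoint-folder multiple-value-separator=";">
    <parameter name="foo"/>
</endpoint-folder>
```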
Note that this property will be inherited in the hierarchy of endpoint-folders. That means it can be overwritten by subordinate folders or endpoints.
There are certain parameters which are always available, and are not provided by the client. These can also be used with the same ${param} syntax.
${request-id}
- this system parameter returns the globally unique id of the request, assigned by OpenEndpoints during the request.
It is possible to load secrets from the AWS Secrets Manager and make them available via ${foo}
parameters, for example in the SMTP configuration.
The relationship between ${foo}
parameters, and which secret they are backed by, is specified in the optional secrets.xml
file, checked in at the root of the application directory.
Note that not all secrets need to actually exist in the AWS Secrets Manager. An error is produced only if the parameter (and thus the secret) is actually used. This allows “if” statements in the configuration, executing certain commands only on certain environments. If the configuration is executed on a different environment, the secret need not exist, as that part of the “if” statement is never executed.
Secrets are fetched on each request. This allows for easy secret rotation. No caching of secrets between requests occurs, and no restart of the application is necessary when updating a secret within AWS Secrets Manager to a new value.
JSON-style secrets, which the AWS Management Console web application creates by default, are not supported. The value of a secret is assumed to be plain text.
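The element and attribute names in the following sketch are assumptions, not taken from this documentation; it only illustrates the idea of mapping a ${foo} parameter to a named secret:

```xml
<!-- hypothetical structure of secrets.xml -->
<secrets>
    <secret parameter-name="smtp-password" aws-secret-id="prod/endpoints/smtp-password"/>
</secrets>
```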
This type of syntax specifies that a redirect of the request should be performed. The body of the <redirect-to>
specifies where. Variables, if present, are replaced in the body.
For example you can redirect a successful request to a "thank.you.html" url.
Parameters like ${foo}
in the body are replaced.
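For example (the URL is hypothetical):

```xml
<success>
    <redirect-to>https://example.com/thank-you.html?customer=${foo}</redirect-to>
</success>
```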
If you use variables, we recommend using this optional element to prevent a malicious request redirecting somewhere wrong:
If no such tag is present, redirect to any URL is allowed. If one or more are present, the URL being redirected to must start with the prefix of one of them; otherwise this is an error.
This type of syntax specifies that the response content is fetched from the named file within the static directory. Subdirectories such as filename="sub-dir/file.pdf"
are supported.
Variables are not allowed in the filename attribute. This is to prevent accidentally making more files accessible than intended (if the client guesses the right filename). But other features can be used to deliver different files depending on different parameter values.
You can optionally add an attribute "download-filename".
If the download-filename="foo.pdf"
attribute is present, then the header in the HTTP response is set indicating that the file should be downloaded as opposed to displayed in the browser window. Parameter Values like ${foo}
are replaced.
Potential Source of Error
Make sure that the filename does not contain space characters, as this will raise an error.
This type of syntax specifies that an HTTP request is performed and the result of this request is streamed back to the client as the response of the call to Endpoints.
You can optionally add an attribute "download-filename".
If the download-filename="foo"
attribute is present, then the header in the HTTP response is set indicating that the file should be downloaded as opposed to displayed in the browser window. Parameter Values like ${foo}
are replaced.
Potential Source of Error
Make sure that the filename does not contain space characters, as this will raise an error.
In endpoints.xml
, <endpoint>
has two sub-sections, <success>
and <error>
. What happens in the case of success or error depends on the tags being present. Details are described in the following subsections.
If the tag (<success>
or <error>
) is missing, or present and empty, this means the server returns an empty 200 OK in the success case and 400 error in the case of failure. This can be useful if the request should simply perform some tasks e.g. send emails.
This type of syntax specifies that the transformation named in the attribute is performed, and its contents are returned to the client in the HTTP response. For example, the transformation might produce HTML to be displayed in the browser, or a PDF to be downloaded.
You can optionally add an attribute "download-filename".
If the download-filename="foo.pdf"
attribute is present, then the header in the HTTP response is set indicating that the file should be downloaded as opposed to displayed in the browser window. Parameter values like ${foo}
are replaced.
Potential Source of Error
The download-filename
attribute will raise an error if it contains a blank or a special character. This is because browsers are inconsistent in how they handle download filenames with special characters. In order to create a "write once, works everywhere" experience special characters are not supported.
<OpenEndpoints/> transforms content from CMS, REST-APIs, databases and static files into other content structure and other content types. The technical solution is XML Transformation.
This requires
XML as a data-source
XSLT to transform XML into other content structure and content type
XSLT (Extensible Stylesheet Language Transformations) is a language for transforming XML documents into many different formats: other XML documents, or other formats such as HTML, PDF (precisely: XSL-FO), or plain text, which may subsequently be converted to other formats, such as JSON. XSLT is an open, stable and established technology.
While XSLT is incredibly versatile and powerful, XML is not a really common type of data-source on the web - which we sincerely regret, because XML is just perfect for describing all sorts of semantic content and provides established tools to manipulate that content. This is where <OpenEndpoints/> comes in:
XML transformation for non-XML data-sources
<OpenEndpoints/> automatically converts various data-source types into XML and applies XSLT to transform original content into something new.
Required components are:
The data-source
The XSLT
The "transformer" - which basically combines the data-source and the XSLT in order to generate new output.
In the data-sources directory under the application, there are zero or more files, each describing a data-source.
A data source is a list of commands (e.g. fetch XML from URL) which produce XML. Each data source is executed, and the results are appended into an XML document (e.g. fetch XML from two URLs, then the result of the data source will be an XML document with two child elements, which are the XML fetched from the two URLs).
The data source file contains the <data-source>
root element then any number of data-source command in any order:
The resulting XML document has the root tag <transformation-input>
. The results of the command are appended, in order, directly underneath this tag.
The XML transformation is stored in a single file ("xslt-file"). In the data-source-xslt
directory under the application, there are zero or more XSLT files that can be used for your transformations. You can use subdirectories to organize your XSLT files.
The data-type of the generated output is determined by the XSLT file.
In the transformers directory under the application, there are zero or more files, each describing a transformation. The transformation determines which XSLT to apply on which data-source.
In contrast to content loaded from the internet, XML already existing within the application does not need to be loaded each time. This makes it a good choice for large files. It is possible to use XML files with more than 100,000 lines of content without causing any performance issues. In general, the limiting factor is the performance of the XSLT transformation rather than the size of the data source.
You can place such files under the xml-from-application
directory within the application. You can load the content of such a file into the data-source:
You can use placeholders to fill in endpoint parameter values into the file attribute:
If the file cannot be found, an error will be raised. To avoid that, add the attribute ignore-if-not-found="true"
. In that case the data-source will look as if the content had not been requested at all.
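A sketch of such a data-source command (the tag name is an assumption following the directory name; the file name is hypothetical, with a parameter placeholder in the file attribute):

```xml
<xml-from-application file="products-${category}.xml" ignore-if-not-found="true"/>
```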
For further details about how to send emails see:
Native XSLT can produce XML, plain text or HTML. A special markup of XML is XSL-FO, which can be converted into PDF. The option to trigger conversion of XSL-FO into PDF is described .
Options for post-processing the result enable flexible applications for various practical use cases.
If you wish to capture the input/output of a transformation, see the corresponding section.
You can fetch content from any REST API and use it as a data-source in <OpenEndpoints/>.
The data-source command to fetch XML or JSON data from any URL is <xml-from-url>. JSON or HTML returned to this command will automatically be converted into XML.
For JSON to XML conversion:
Any characters which would be illegal in XML (for example an element name starting with a digit) are replaced by _xxxx_ containing their hex Unicode character code.
Note that if any JSON objects have a key _content
, then a single XML element is created, with the value of that _content
key as the text body, and other keys from the JSON object being attributes on the resulting XML element.
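To illustrate the _content rule, consider this hypothetical JSON input and the resulting XML (a sketch; the exact element naming for other JSON structures is not specified here):

```xml
<!-- JSON input:  {"item": {"id": "5", "_content": "Hello"}}
     resulting XML element (sketch): the _content value becomes the text body,
     the remaining keys become attributes -->
<item id="5">Hello</item>
```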
<url> is mandatory. That should be no surprise ;-)
<method> can be "POST" or "GET". If omitted "GET" will be used as a default.
Zero or many <get-parameter>, zero or many <request-header> and zero or one <basic-access-authentication> - all optional.
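A hedged sketch combining these elements (the exact attribute and body syntax of the child elements is an assumption here, not confirmed by this text):

```xml
<xml-from-url>
    <url>https://api.example.com/customers</url>
    <method>GET</method>
    <!-- appended to the URL as a query parameter -->
    <get-parameter name="id">${customer-id}</get-parameter>
    <request-header name="Accept">application/xml</request-header>
</xml-from-url>
```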
The beauty of <OpenEndpoints/> shows in the handling of the optional request body, which can be JSON or XML. There are several different options for building the content of the request body.
The request body is expressed as XML within the <xml-body> tag. Endpoint parameters are expanded.
Uploaded content encoded in base64 can be filled into any tag of the request body. This requires two steps:
Add attribute upload-files="true"
to <xml-from-url>
Add to any element of your request body attributes upload-field-name="foo" encoding="base64"
The uploaded content will expand into that XML element.
base64 encoded content only
The expansion of uploaded content works for base64 encoded content only!
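The two steps above might look like this (the URL, request-body structure and field name are hypothetical):

```xml
<xml-from-url upload-files="true">
    <url>https://api.example.com/documents</url>
    <method>POST</method>
    <xml-body>
        <document>
            <!-- the base64-encoded upload from form field "foo" expands here -->
            <content upload-field-name="foo" encoding="base64"/>
        </document>
    </xml-body>
</xml-from-url>
```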
It is also possible to send generated content within a request body:
Add attribute expand-transformations="true"
to <xml-from-url>
Add to any element of your request body attributes xslt-transformation="foo" encoding="base64"
Adding that attribute to the element indicates that the transformation with that name should be executed (for example, generate a PDF file), and the contents of the resulting file should be placed in this tag. The encoding is always base64, no other encodings are supported.
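A sketch of the two steps (the URL, request-body structure and transformation name are hypothetical):

```xml
<xml-from-url expand-transformations="true">
    <url>https://api.example.com/documents</url>
    <method>POST</method>
    <xml-body>
        <document>
            <!-- the output of the transformation named "foo" (e.g. a generated PDF)
                 is inserted here, base64-encoded -->
            <attachment xslt-transformation="foo" encoding="base64"/>
        </document>
    </xml-body>
</xml-from-url>
```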
The request body is generated by XSLT. This leaves maximum flexibility to build different content of the request body depending on endpoint parameter values!
Note that this is a transformation within a transformation. The XSLT takes a <parameters> document as its input; it does not have access to the results of any other data sources. The reason is that a data source cannot use data produced by another data source.
The XSLT file is taken from the http-xslt directory.
The transformation-input to apply that XSLT has <parameters> as its root tag.
The optional attribute upload-files="true"
and expand-transformations="true"
may be present, as above.
The request body is expressed as JSON within the <json-body> tag. Endpoint parameters are expanded.
Endpoint parameters are expanded within the JSON string context, that is to say no concern about escaping is necessary.
The options for expanding base64 content from file uploads or generated content are not available for JSON.
The request body is generated by XSLT. That requires that the result of the transformation is valid JSON.
Note that this is a transformation within a transformation. The XSLT takes a <parameters>
as its input document, see above "XML Request Body from Transformation" for the format of that block.
By default the root-tag of the generated output is <xml-from-url>. Use the optional tag attribute to generate any different root-tag:
Different results may be required for the same endpoint based on certain criteria. It is possible to define any number of success elements such as:
The conditions are considered in the order they’re written in the file, so put more general “catch-all” items at the bottom and more specific “if...” items at the top.
Parameters may also be used in the "equals" or "notequals" attribute. You could for example create a condition like
Only if=".." equals=".." and if=".." notequals=".." are available.
If the parameter has a value like foo||bar, i.e. created as a result of a request such as ?param=foo&param=bar, then equals=".." will check whether any of the values match, and notequals=".." will check that none of the values match.
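A sketch of such conditional success elements in endpoints.xml (the parameter name is hypothetical, and the contents of each element are omitted); note the catch-all comes last:

```xml
<success if="${language}" equals="de">
    <!-- response for German requests -->
</success>
<success if="${language}" notequals="de">
    <!-- response for all other languages -->
</success>
<success>
    <!-- general catch-all, considered last -->
</success>
```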
This lists the most recent object keys (filenames) out of the bucket specified in the aws-s3-configuration.xml file (see example-customer for the example format).
“Most recent” means the keys of the objects with the most recent last modified timestamp.
The command looks like:
and the results look like:
This reads a particular object (file) from AWS S3. It is assumed that this object contains XML data. The command looks like:
and the results look like:
The data-source command <literal-xml> lets you define XML output directly and "literally" in the data-source definition file.
The root tag <literal-xml>
is not included in the data-source xml output. In the example above, the generated xml will be:
This data-source type combines well with parameter placeholders. For example, you can use something like this:
If ${foo}
equals "hello world", the data-source output will be:
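A sketch of <literal-xml> with a placeholder (the inner element name is hypothetical):

```xml
<literal-xml>
    <greeting>${foo}</greeting>
</literal-xml>
```

If ${foo} equals "hello world", the output (without the <literal-xml> root tag) is:

```xml
<greeting>hello world</greeting>
```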
Note that the contents of <literal-xml> must be elements; simply placing text straight under the <literal-xml> element will not work.
This content-source produces as its output a description of the entire application directory structure (=your configuration).
The generated content has a root-tag <application-introspection>
and returns
<directory name="x">
for any directory
<file name="x"/>
for all XML files. The content of the XML file is included as a child of this tag, except for the directory xml-from-application. (Use the <xml-from-application> data source, not <application-introspection>, to load content from such files.)
<file name="x"/>
for all non-XML files. In this case the content is not in any way included.
XML files must actually contain XML
If a file named *.xml
does not in fact contain well-formed XML, this is an error.
No expansion of endpoint parameters
Parameters like ${foo}
found in the file are not expanded in this type of content-source.
<OpenEndpoints/> can generate unique auto-increment values and provide them as a data-source. Read On-Demand Incrementing Number for more details.
The data-source command <xml-from-database>
fetches rows from a database and transforms rows and columns into XML.
Currently only MySQL and PostgreSQL are supported. Other databases would require other client JARs that are not provided in the current version of the software.
An alternative option is using the environment variable to connect to your local Endpoints database; in this example the SQL fetches request details from the request-log:
<jdbc-connection-string> specifies how to connect to the database to perform the query. This element is mandatory.
If it is present with no attributes, the body of the tag specifies the JDBC URL. Using a CDATA section is recommended to avoid having to perform XML escaping. Don't forget that username and password values must be URL-escaped.
If it has an attribute from-environment-variable="foo" then the environment variable with that name is read and should contain the JDBC URL. Note that endpoint parameters are NOT expanded in the variable name, to prevent an attacker from gaining access to other environment variables.
<sql> should be self-explanatory :-)
Endpoint parameters are NOT expanded as that would allow SQL injection attacks.
For PostgreSQL, non-string parameters require a cast such as ?::int or ?::uuid to convert the string supplied by the endpoint parameter into the right type.
Zero or more <param> elements, whose body are the contents of any "?" in the <sql> element. Here, endpoint parameters ARE expanded.
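Putting these elements together, a sketch of the command (database host, table and parameter names are placeholders):

```xml
<xml-from-database>
    <!-- CDATA avoids XML escaping; username/password must be URL-escaped -->
    <jdbc-connection-string><![CDATA[jdbc:postgresql://db.example.com/mydb?user=foo&password=bar]]></jdbc-connection-string>
    <!-- endpoint parameters are NOT expanded here; the ?::int cast converts the string parameter -->
    <sql>SELECT id, name FROM customers WHERE id = ?::int</sql>
    <!-- endpoint parameters ARE expanded in <param> -->
    <param>${customer-id}</param>
</xml-from-database>
```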
Generated output looks like this:
By default the root-tag of the generated output is <xml-from-database>. Use the optional tag attribute to generate any different root-tag:
This feature can be used to download e.g. Word or Excel files.
Only modern formats such as DOCX and XLSX are supported; old-style DOC and XLS files are not supported.
The body of the document may contain parameter references such as ${foo}; these are expanded to the parameters available through the endpoint processing.
The endpoint application directory may contain an ooxml-responses
directory and within that any referenced files must be present, for example foo.docx
in the example above.
Endpoint parameters can be used as placeholders in your data source. On executing the data source definition any parameter placeholder will be replaced by the respective value of the parameter.
You can only use parameters defined in the endpoints.xml file of your configuration.
You cannot use:
intermediate parameters (from a task)
content submitted from a file-upload (or - more generally - any additional content submitted from a multi-part message)
If my parameter is called "foo", then I can use ${foo} as a placeholder.
Placeholders can be used inside the data-source.xml file.
For example, you can select the specific piece of content loaded from CMS by evaluating a submitted parameter value. Or you could load specific database rows selected for a specific parameter value.
For security or technological reasons this does not work in every case. For details please refer to the specific sections:
On loading content from any of these data-source types, placeholders will be automatically replaced by the corresponding parameter values:
xml-from-application
xml-from-url
For example, you can use ${foo} as a placeholder directly in your CMS. On loading data from your CMS, actual values will replace the placeholders.
Potential Source of Error!
Using a placeholder for parameters not declared in your endpoints.xml will raise an error!
For example, if you use ${firstname} in your CMS, but a parameter "firstname" does not exist in your application, this will not work.
There is the possibility of adding instructions to output the input/output of the transformation to AWS S3.
This creates objects in the S3 bucket which have the tags as specified, the correct Content-Type, and in addition a tag called "environment" which is either "preview" or "live".
This type of syntax specifies that an endpoint "step-1" is executed and on success another endpoint "step-2" will be called.
The result of the original request (i.e. all parameters) is used as the request to the endpoints being forwarded to:
All parameter values are forwarded to the new endpoint.
System parameters such as user agent, client IP address and file uploads are all available at the endpoint forwarded to. They are inherited by the forwarded endpoint.
It’s possible to chain the execution of any number of endpoints in this manner (e.g. endpoint e1 forwards to e2 which itself forwards to e3). A circular chain of such references is not allowed as the processing of such a chain would never end.
The “redirect” from one endpoint to another happens within the Endpoints software; no redirect is actually sent to the user’s browser.
Only one “request log” gets written, despite a chain of multiple endpoints being processed. Only the first “parameter transformation input/output” is saved with that “request log” entry, despite each endpoint in the chain potentially having its own parameter transformation.
The beauty of <OpenEndpoints/> is its potential to produce custom content on demand. Practically, that means: Data submitted from a webform (or any other method of submitting data) can directly influence
what content is loaded from which data sources
what exactly the transformation of that content shall look like.
Endpoint parameters come from the following sources:
Parameters submitted from the originating request to the endpoint including, but not limited to, ?x=y GET parameters
The result of a parameter transformation (for example, extracting parameters from arbitrary XML sent as a POST)
An "intermediate value" generated by one task and consumed by future items (for example, results from an HTTP request which was sent during the processing)
This data source type outputs all available parameter values and makes them available for the data-source transformation.
The generated content looks like this:
A special type of parameter value is an uploaded file. For example, if you are submitting data from a webform which has <input type="file" name="my-upload"/>, then the representation of the input depends on the type of the uploaded content.
If the uploaded content can be parsed as XML, this xml will be available in the data-source:
If the uploaded content cannot be parsed as XML, the output is:
XML content vs. XML file extension
Whether a file upload is XML is determined by whether the content can be parsed as XML. The uploaded filename and Content-Type are ignored, to allow files such as SVGs which have neither an XML file extension nor an XML Content-Type.
Intermediate parameters are not regular endpoint parameters, i.e. they are not defined as a <parameter> in endpoints.xml and their value does not come from the original request. Intermediate outputs are generated from tasks that execute during the processing of the request. On forwarding an endpoint to another endpoint those parameters will be made available in the parameters-data-source as well:
By default the root-tag of the generated output is <parameters>. Use the optional tag attribute to generate any different root-tag:
Sometimes it may be required to insert a unique incremental id into a generated content.
Depending on the specific business use case, the incremental id may be unique “perpetual”, or it may be required to re-start the counter every month or every year.
Type: Example Value
perpetual: 23456
year: 2020-0068
month: 2020-01-0017
Request UUID
In addition, each request submitted to <OpenEndpoints/> gets assigned a globally unique UUID in the transaction-log. You can access this id in the parameter-transformation, but it is not available as a data source.
The command to fetch a new auto-increment value is a data-source.
Whenever a transformer has a data-source with that command the auto-increment will be triggered. The type attribute may take the values “perpetual”, “year” or “month”. The numbers are unique within the application.
Formatting
Note that the data source returns a number only. If the request has the incremental number "17" for the current month, the value provided (for type="month") will be 17. In order to get something like 2020-01-0017 you need to build such a format with XSLT.
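For example, a format like 2020-01-0017 could be assembled in XSLT 2.0 along these lines (the element name in the select expression is hypothetical; adapt it to your data-source output):

```xml
<xsl:value-of select="concat(
    format-date(current-date(), '[Y0001]-[M01]'),
    '-',
    format-number(/transformation-input/on-demand-incrementing-number, '0000'))"/>
```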
The term “on-demand” refers to the fact that the number does not get consumed unless it is requested. If one value (e.g. type="month") is consumed, other values (e.g. type="year") are not automatically consumed as well.
The value is only consumed if the endpoints request is successful; if the request is not successful the number is again made available to future requests. The numbers do not have any “holes” or missed-out numbers, so are suitable for use in invoice numbers.
The same endpoint may contain several different transformers, some of which might call the same data-source. For example, the endpoint may include a task to send an email, and the email-body and some attachment both require the same data-source. In this case - the data source is used twice within the same request - they both see the same number. The new incremental id is created per request, not per use of the (same) data-source. However, if you use 2 different data-sources, both calling an auto-increment, then 2 different ids will be created.
The incremented values are stored in the database. Changing the values in the database will affect the next generated number.
The parameter-transformation-xslt will use parameter-transformation-input.xml as an input and will generate parameter-transformation-output.xml.
The required output scheme is:
Each parameter existing in endpoints.xml must be present in the output, except if the parameter has a default-value (which will be applied if the parameter is missing from the output).
Outputting a parameter that does not exist in endpoints.xml will raise an error.
If the same parameter appears multiple times, then later values override earlier values.
Optionally an <error> tag can be added. If present, an error will be raised, with the custom error message taken from this tag. Note that an empty error tag <error/> will also raise an error, but with an empty error message.
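Assuming the output root tag mirrors the file name parameter-transformation-output.xml (an assumption; the parameter names are placeholders), a valid output might look like:

```xml
<parameter-transformation-output>
    <parameter name="firstname" value="Alice"/>
    <parameter name="country" value="AT"/>
    <!-- optionally raise a custom validation error:
         <error>Please supply a valid country code</error> -->
</parameter-transformation-output>
```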
Offer-Ready is built around the concept of dynamically generated content. But of course parts of the content required to build your endpoints may not require being dynamically generated, because they already exist as some sort of “static” content.
Static content such as images or files may be stored in the application’s static directory.
Image paths within an XSL-FO file may have an absolute path (to some URI), or a relative path.
Absolute paths will be "executed" by the PDF reader. Make sure that the path is accessible from your client's device.
Relative paths will use the application’s static directory as a root directory. Images will be embedded in the generated PDF.
Inputs from request and data from optionally added data sources are automatically placed into a temporary XML called <parameter-transformation-input>.
The custom XSLT ("parameter-transformation-xslt") will be applied to this XML.
OpenEndpoints will automatically insert additional useful tags:
all parameters submitted in the originating request, as <parameter name="xxx" value="xxx"/> elements. This explicitly includes the system parameters hash and environment (unless omitted, in which case the parameter is absent).
endpoint: name of the endpoint that has been called.
debug-requested: present if the request had a parameter "debug=true"; otherwise this tag will be omitted.
<http-header name-lowercase="user-agent">Foo</http-header> This may be present multiple times, or not at all. HTTP Headers are case insensitive so e.g. “User-Agent” and “user-AGENT” are the same header. Therefore these are normalized to all lowercase.
<cookie name="Session">12345</cookie> This may be present multiple times, or not at all. Cookies on the other hand are case sensitive, so it’s possible to have “Session” and “SESSION” as two different cookies with different values, so these aren’t normalized to lowercase.
ip-address
application: the name of the application
application-display-name: the display-name of the application, if available from the database.
git-revision: if the application was published from Git via the Service Portal then this contains the Git hash for the revision. (If the application is deployed in "single application mode" then this tag is omitted.)
debug-allowed: present if debugging is set to "allowed" in the database; otherwise this tag will be omitted.
secret-key: one separate tag for each secret-key
random-id-per-endpoint: the database request-log adds a random id per request per application
base-url: The base-url of the application is taken from an environment variable.
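Putting the tags above together, a parameter-transformation-input document might look roughly like this (values are illustrative and the exact nesting is a sketch, not confirmed by this text):

```xml
<parameter-transformation-input>
    <parameter name="firstname" value="Alice"/>
    <endpoint>my-endpoint</endpoint>
    <http-header name-lowercase="user-agent">Mozilla/5.0</http-header>
    <cookie name="Session">12345</cookie>
    <ip-address>203.0.113.7</ip-address>
    <application>my-application</application>
    <base-url>https://endpoints.example.com/</base-url>
    <!-- debug-requested, git-revision etc. appear only when applicable -->
</parameter-transformation-input>
```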
The JSON payload is converted to XML.
Create the <parameter-transformation> tag
The generated parameter-transformation-input.xml is available in the request-log.
A data source is a list of zero or many commands which fetch content from different content sources and produce XML. The resulting xml output is the input ("transformation-input") for XSLT to generate output documents.
Sometimes it makes sense to apply one or many intermediate steps to modify the loaded content before it becomes a "transformation-input". Possible reasons for this include
You might want to reuse the same XSLT to generate an output document for different content sources, but these content-sources do not produce the exactly same structure of input. An intermediate transformation step can be used to "normalize" the input among different content sources.
A complex transformation might be implemented more elegantly by splitting it into several subsequent steps.
A data-source-post-processing.xslt can apply XML transformations within the data-source object.
In the data-source-post-processing-xslt directory under the application, there are zero or more files, each describing a post-processing transformation step. Each file is an XSLT which expects input data with the root tag <data-source-post-processing-input> and which shall produce any output XML with the root tag <data-source-post-processing-output>.
Content loaded from source A:
Expected output:
You can add zero or many data-source-post-processing XSLTs to each content-source of your data-source object. For each content-source, post-processing will be executed separately. Multiple steps for the same content-source will be executed subsequently in the order of the post-processing-xslt files.
In addition, you can apply the same logic to the entire data-source object. In this case all content-sources are loaded as a first step, and post-processing applies to the collection of all content-sources.
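A minimal post-processing XSLT, using the root tags named above (the normalization itself is illustrative; the <item> element is hypothetical):

```xml
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- expects <data-source-post-processing-input>,
         produces <data-source-post-processing-output> -->
    <xsl:template match="/data-source-post-processing-input">
        <data-source-post-processing-output>
            <!-- illustrative normalization: copy all <item> elements
                 regardless of the source structure they were loaded with -->
            <xsl:copy-of select="//item"/>
        </data-source-post-processing-output>
    </xsl:template>
</xsl:stylesheet>
```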
In the transformers directory under the application, there are zero or more files, each describing a transformation. Subdirectories are not supported.
The root element of each transformer file is <transformer>
.
The root <transformer> element has a mandatory attribute data-source. This is the name of the data-source without file-extension. For example, if you have a file my-data-source.xml in the data-sources directory, then the correct attribute value is data-source="my-data-source".
The <xslt-file> element is optional. If omitted, the data-source will be returned without XSLT transformation. The name attribute of the xslt-file element is mandatory. It is the file name of an XSLT file including the file extension. Possible file extensions are ".xslt" or ".xsl".
The optional content-type element sets the MIME type of the generated output. The type attribute is mandatory; its value is the MIME type that shall be set. If no content-type is set, Endpoints uses heuristics to guess an appropriate content type.
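Putting these elements together, a transformer file might look like this (the data-source and file names are placeholders):

```xml
<transformer data-source="my-data-source">
    <!-- applied to the <transformation-input> produced by my-data-source.xml -->
    <xslt-file name="render-page.xslt"/>
    <!-- optional; without it the content type is guessed -->
    <content-type type="text/html"/>
</transformer>
```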
Potential Source of Error!
Using a placeholder for parameters not declared in your endpoints.xml will raise an error!
For example, if you use ${firstname} in your CMS, but a parameter "firstname" does not exist in your application, this will not work.
Generating a REST API Request-Body
REST APIs often require a specific MIME type for the request body. Use the content-type
element to set the required value.
The data-source will be wrapped into a root-tag <transformation-input>
.
Note that this is useful for developing and debugging a data-source transformation. Omitting the xslt-file element returns exactly the input to which your XSLT will be applied.
The correct content-type is set automatically. It is possible to deliberately set a different content-type, but we do not recommend to do so.
Note that in the example above the XSLT produces XML, which is then converted to JSON. An alternative option to generate JSON is to have XSLT with output type "text". In that case the correct syntax is different:
JSON Syntax
Conversion of XML to JSON can be done in different ways. If you need a specific JSON syntax, XSLT generating JSON directly might be the better option compared to generating XML and having it converted.
UTF-8
XSLT output by default is UTF-8. Use <content-type type="text/plain; charset=xxx"/>
to set a specific charset if required. In such case the generated output needs to match that specific charset, of course.
XSLT cannot generate output of type Excel. <OpenEndpoints/> offers a workaround which converts a simple HTML table into Excel binary format.
The format is chosen to be as similar to XHTML as possible. The syntax is as follows:
HTML should contain <table>
elements.
These should contain <tr>
elements and within them <td>
(or <th>
) elements.
Excel files differentiate between "text cells" and "number cells". The contents of the <td>
are inspected to see if they look like a number, in which case an Excel "number cell" is produced, otherwise an Excel "text cell" is produced.
The attribute <convert-output-xml-to-excel input-decimal-separator="xxx">
affects how numbers in the input HTML document are parsed.
"dot" (default). Decimal separator is ".", thousand separator is ",".
"comma". Decimal separator is ",", thousand separator is ".".
"magic". Numbers may use either dot or comma as thousand or decimal separator, or the Swiss format 1'234.45. Heuristics are used to determine which system is in use. (This is useful in very broken input documents that use dot for some numbers and comma for others, within the same document.) The numbers must either have zero decimal (e.g. "1,024") or two decimal places (e.g. "12,34"). Any other number of decimal places in the input data will lead to wrong results.
The number of decimal places in the <td> data is carried over to the Excel cell formatting. That is to say, <td>12.20</td> will produce an Excel number cell containing the value 12.2 with the Excel number format showing two decimal places, so it will appear as 12.20 in the Excel file.
To force the cell to be an Excel text cell, even if the above algorithm would normally classify it as an Excel number cell, mark the table cell with <td excel-type="text">.
The colspan
attribute, e.g. <td colspan="2">
, is respected.
The following style elements of <td>
are respected:
style="text-align: center"
(Right align etc. is not supported)
style="font-weight: bold"
style="border-top:"
(Bottom borders etc. are not supported)
style="color: green"
, style="color: red"
, style="color: orange"
(Other colors are not supported.)
<thead>
, <tfoot>
and <tbody>
are respected. (Elements in <tfoot>
sections will appear at the bottom of the Excel file, no matter what order the tags come in in the HTML.)
Column widths are determined by the lengths of text within each column.
Any <table>
which appears inside a <td>
is ignored (i.e. tables may be nested in the HTML, only the outermost table is present in the resulting Excel file.)
The contents of any <script>
elements are ignored
The contents of any other tags such as <span>
and <div>
are included.
Table rows which contain only table cells which contain no text are ignored. (Often such rows contain sub-tables, which themselves are ignored. Having empty rows doesn't look nice.)
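The rules above can be illustrated with a small HTML fragment (as produced by your XSLT; the cell contents are placeholders):

```xml
<table>
    <thead>
        <tr><th style="font-weight: bold">Product</th><th>Price</th></tr>
    </thead>
    <tbody>
        <!-- "12.20" looks like a number: becomes an Excel number cell
             formatted with two decimal places -->
        <tr><td>Widget</td><td>12.20</td></tr>
        <!-- force a text cell even though the content looks numeric -->
        <tr><td>Order No.</td><td excel-type="text">00123</td></tr>
        <!-- colspan and center alignment are respected -->
        <tr><td colspan="2" style="text-align: center">End of list</td></tr>
    </tbody>
</table>
```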
RTF can be generated with XSLT using output type text. Set the correct content-type so that a downloaded (generated) RTF opens in Word.
XSLT parameters can be useful to re-use the same XSLT for different transformations.
The parameter value can be set in the transformer file:
Note that variables ${foo} are not supported with this feature.
Some email programs may have issues with long links. Links to endpoints (containing all parameters) may get long, so this can become a problem.
The “Short Link To Endpoint” feature allows shorter links to endpoints (including all parameters) to be created. This is analogous to forwarding from one endpoint to another, with the exception that rather than the destination endpoint getting executed immediately, a link is created to the processing of that endpoint.
The task creates a short-link in the database with a random code. The resulting full link includes that code as well as the base URL of the current installation of Endpoints.
The short link looks like this: [base-url]/shortlink/RANDOMCODE.
The generated link is written to an output intermediate variable. The concept of intermediate variables is described here: Intermediate Values.
The shortlink will be auto-deleted from the database after the time specified in expires-in-minutes. For example, if you put expires-in-minutes="1440" then the link will be available for one day. After that time the link will no longer work.
Use a syntax like the following to create a short link to an endpoint in the variable ${foo}. (You can choose any other variable name, of course).
The variable ${foo} can then be used as an input-intermediate-value in a subsequent task.
For example, you can send an email containing ${foo} in the email-body. The xslt (to create the email-body) would look like this:
You can add custom key/value pairs to your request-log.
For example, if you want to add the "country" submitted in your contact workflow to your request log, you simply create a parameter ${country} and add this task to your endpoint:
If an endpoint is forwarding the request to another endpoint, then the initial "parent" endpoint will remain the only entry in the log. However, you can add key/value pairs to this "parent" log entry in each subsequent step.
To send an email from OpenEndpoints you need to
Configure your email server
Create an email task
To send email from your application the file "email-sending-configuration.xml" must be present under application.
The file has the root element <email-sending-configuration> and has the following sets of sub-elements.
If no username and password are set, TLS will not be used
Extra headers are written into every email sent via SMTP, for example authorization headers for a commercial email sending service.
Any of the fields (apart from the header names) may use ${foo}
parameters.
An alternative option is to configure an MX address for the DNS lookup.
The task <task class="endpoints.task.EmailTask"> sends an email. It has the following sub-elements configuring it:
• <from> is mandatory (variables are expanded)
• <to> is mandatory (variables are expanded). There may be multiple <to> elements. Each <to> sends a separate email, to just this recipient. Per <to>, only one recipient address is allowed
• <subject> is mandatory (variables are expanded)
• <body-transformation name="a-transformation"/> is mandatory, and can appear multiple times. This references a transformation (see below). All the different results are placed into a ”multipart/alternative” email part. It would be normal for one referenced transformation to produce HTML and the other plain text.
• <attachment-static filename="path/foo.pdf"> takes the foo.pdf file out of the static directory and includes it as an attachment in the email. Variables are not allowed in the filename attribute.
• <attachment-transformation name="a-transformation" filename="invoice-${invoice-number}.pdf"/>. For each of the elements, the transformation is executed, and the resulting bytes are attached as a file to the sent email. The name of the file is specified in the filename attribute, variables are expanded.
• <attachment-ooxml-parameter-expansion source="foo.docx" filename="invoice-${invoice-number}.pdf"/> will read in the file “foo.docx” from the “ooxml-responses” directory under the Endpoint's configuration and replace any ${foo} variables in the document's body, and deliver it. Only DOCX is supported; DOC is not supported. The name of the file is specified in the filename attribute, parameters like ${foo} are expanded.
• <attachments-from-request-file-uploads/>. This includes as attachments all file uploads that have been uploaded to this request. Any attachment may (optionally) have attributes such as if="${foo}" equals="bar".
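Combining the sub-elements above, an email task might look like this (transformation names, addresses and file names are placeholders):

```xml
<task class="endpoints.task.EmailTask">
    <from>noreply@example.com</from>
    <!-- each <to> sends a separate email to just this recipient -->
    <to>${email}</to>
    <subject>Your invoice ${invoice-number}</subject>
    <!-- multiple body transformations form a multipart/alternative body,
         e.g. one HTML and one plain-text variant -->
    <body-transformation name="email-body-html"/>
    <body-transformation name="email-body-text"/>
    <!-- executed transformation attached as a file; variables expand in filename -->
    <attachment-transformation name="invoice-pdf" filename="invoice-${invoice-number}.pdf"/>
    <!-- file taken from the static directory; no variables allowed here -->
    <attachment-static filename="terms/terms-and-conditions.pdf"/>
</task>
```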
If the body has a content type like text/html; charset=utf-8 then it may include tags such as <img src="cid:foo/bar.jpg">. The tag is most commonly an <img> but can be any tag.
The system then searches the static directory for a file with that path. The file is included with the email as a “related” multi-part part, meaning the file is available to the HTML document when it's rendered in the email client.
A task is like a “secondary action” assigned to an endpoint. The primary action is the <success> and <error> action described in the endpoint configuration. The "secondary" action may contain zero or many tasks, which will be executed once the primary action has been completed successfully.
Task vs Primary Action
The task does some action, but the response body of the task is not part of the response to the client's request. The response to the request might be a simple "status 200", while the request had triggered the execution of a task which did have a response body.
With a task you can
send a request to any API
send emails with attachments
Each <task> is a set of commands embedded into an endpoint-definition in the endpoints.xml:
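As a sketch, an endpoint definition with an embedded task might look like this. The endpoint name, success-action element, task class, and URL are illustrative assumptions.

```xml
<endpoint name="submit-order">
  <!-- primary action: its result is the response sent to the client -->
  <success>
    <response-from-static filename="thank-you.html"/>
  </success>
  <!-- secondary action: executed after the primary action succeeds;
       its response body is not returned to the client -->
  <task class="endpoints.task.HttpRequestTask">
    <url>https://crm.example.com/api/orders</url>
  </task>
</endpoint>
```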
Parameter Transformation is an option for advanced processing of inputs from a request. It allows you to
have different names for parameters submitted and parameters declared in endpoints.xml
produce parameter values different from submitted values
validate input data and create custom error messages, implementing custom validation rules
process XML body as input (content type "application/xml")
To use parameter transformation, add a tag <parameter-transformation> to the endpoint definition in the endpoint folder. The mandatory xslt attribute refers to a file in the directory "parameter-transformation-xslt" under the application. You may use subdirectories to organize XSLT files in this directory.
Optionally, zero or many data sources may be added. The syntax for adding data sources is the same as in a data-source definition.
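A minimal sketch of such an endpoint definition is shown below. The XSLT filename and the data-source element (including its attribute placement) are illustrative assumptions.

```xml
<endpoint name="register">
  <parameter-transformation xslt="validate-registration.xslt">
    <!-- optional data source; same syntax as in a data-source definition -->
    <xml-from-url url="https://example.com/country-codes.xml"/>
  </parameter-transformation>
</endpoint>
```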
In the absence of "parameter transformation" the default behaviour for processing data inputs is:
Data can be sent as GET or POST parameters only. It is not possible to supply request type "application/xml" as an input.
The name of each parameter sent must exist as a parameter name in endpoints.xml.
Sending parameters that do not exist in endpoints.xml will cause an error.
For every parameter existing in endpoints.xml a value must be submitted, unless a default value exists; otherwise an error occurs.
The value of each parameter equals the value of the respective parameter submitted.
If the request is a GET/POST with parameters then all parameters are taken and <parameter name="x" value="y"/> elements are created, no matter whether they are declared in endpoints.xml or not. If the request is a POST with an XML body then the XML is taken as is.
Any optionally specified data sources are executed. Any GET and POST parameters may be accessed with the ${x} syntax (see the data source descriptions for where parameters may be used). Any parameter which is referenced but not supplied with the request is left empty (no error is produced), since the point of the parameter transformation is to detect errors itself; an error produced by a missing parameter would prevent the parameter transformation from producing a custom <error> output.
If the result of the transformation includes <error>Param 'x' must be an integer</error>, this error message is returned to the user, and no further processing is performed. The absence of an error tag is considered a success (i.e. there is no “success” tag or similar).
Parameter values are extracted from the result of the transformation.
Normal parameter processing steps are taken (default values are applied, an error if values are missing, etc.).
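The steps above can be sketched as a parameter-transformation XSLT implementing a custom validation rule. The exact input and output schemas are described elsewhere in this documentation; the XPath expressions and parameter names used here are assumptions.

```xml
<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <parameter-transformation-output>
      <xsl:choose>
        <!-- custom validation rule: 'quantity' must be numeric -->
        <xsl:when test="string(number(//parameter[@name='quantity']/@value)) = 'NaN'">
          <error>Param 'quantity' must be a number</error>
        </xsl:when>
        <xsl:otherwise>
          <!-- rename the submitted parameter to the name declared in endpoints.xml -->
          <parameter name="qty"
              value="{//parameter[@name='quantity']/@value}"/>
        </xsl:otherwise>
      </xsl:choose>
    </parameter-transformation-output>
  </xsl:template>
</xsl:stylesheet>
```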
OpenEndpoints supports any kind of HTTP request to other systems. You can call and fetch data from
a REST API
a SOAP interface
a simple URL that returns content
Examples:
fetch data from any CRM or ERP system (as long as it offers a REST or SOAP API that can be called from the internet)
upload generated files to a CRM or an archive
fetch the next available invoice-number from your accounting system, generate an invoice and send it as an email
validate the existence of an address by calling an external validation service
Task vs Data-Source
Interaction with external APIs is available for both purposes: loading data for a data-source transformation, and executing a task, which is the subject of this section. Technically both types of application are very similar, but there are some differences:
A task can have a "condition" based on a parameter value. While a request as part of a data source will always be triggered (on using the data source), a task may be triggered only if a certain condition applies.
The response body of a task can be parsed to generate values that do not come from the user's request, instead they come from that task. Such "intermediate values" can be used like parameters in subsequent tasks.
This task performs an HTTP request, checks the response is a 2xx OK, and ignores the response body.
Redirects are not followed.
The attribute ignore-if-error="true" may be present on the <task> element to indicate that if an error occurs (e.g. server not found, non-2xx response, etc.) this error is ignored. By default, the error aborts the processing of the endpoint.
The request body is expressed as XML within the <xml-body> tag. Endpoint parameters are expanded.
Uploaded content encoded in base64 can be filled into any tag of the request body. This requires 2 actions:
Add attribute upload-files="true" to <xml-from-url>
Add to any element of your request body attributes upload-field-name="foo" encoding="base64"
The uploaded content will expand into that xml element.
base64 encoded content only
The expansion of uploaded content works for base64-encoded content only!
It is also possible to send generated content within a request body:
Add attribute expand-transformations="true" to <xml-from-url>
Add to any element of your request body attributes xslt-transformation="foo" encoding="base64"
Adding that attribute to an element indicates that the transformation with that name should be executed (for example, to generate a PDF file) and that the contents of the resulting file should be placed in this tag. The encoding is always base64; no other encodings are supported.
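A sketch of a request body that embeds both an uploaded file and a generated file as base64. The URL, field, element, and transformation names are illustrative; the placement of the upload-files and expand-transformations attributes on <xml-body> is an assumption.

```xml
<task class="endpoints.task.HttpRequestTask">
  <url>https://archive.example.com/api/documents</url>
  <xml-body upload-files="true" expand-transformations="true">
    <document>
      <!-- filled with the base64 content of the uploaded file "attachment" -->
      <original upload-field-name="attachment" encoding="base64"/>
      <!-- filled with the base64 content of the generated file -->
      <rendered xslt-transformation="invoice-pdf" encoding="base64"/>
    </document>
  </xml-body>
</task>
```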
The request body is generated by XSLT. This leaves maximum flexibility to build different content of the request body depending on endpoint parameter values!
Note that this is a transformation within a transformation. The XSLT takes a <parameters> document as its input; it does not have access to the results of any other data sources, because data sources cannot use data produced by another data source.
The XSLT file is taken from the http-xslt directory.
The transformation-input to which the XSLT is applied has <parameters> as its root tag.
The optional attribute upload-files="true" and expand-transformations="true" may be present as above.
The request body is expressed as JSON within the <json-body> tag. Endpoint parameters are expanded.
Endpoint parameters are expanded within the string context of JSON, which means no escaping concerns are necessary.
The options for expanding base64 content from file uploads or generated content are not available for JSON.
The request body is generated by XSLT. This requires the result of the transformation to be valid JSON.
Note that this is a transformation within a transformation. The XSLT takes a <parameters> document as its input; it does not have access to the results of any other data sources, because data sources cannot use data produced by another data source.
The XSLT file is taken from the http-xslt directory.
The transformation-input to which the XSLT is applied has <parameters> as its root tag.
The options for expanding base64 content from file uploads or generated content are not available for JSON.
Allow debugging and send a request with debug=true. For details, see the section on debugging.
Offer-Ready uses multiple cores, if available. If not specified otherwise, tasks are executed in parallel and will finish in an arbitrary order. To enforce a specific order of tasks, see the section on task execution order.
If a task requires the output of another task as an input, you may use intermediate values.
Input from [1] and [2] is placed into an XML called "parameter-transformation-input.xml".
The XSLT (from directory parameter-transformation-xslt under the application) is applied to "parameter-transformation-input.xml". The generated output is called "parameter-transformation-output.xml" and must conform to a specific schema.
A particular strength of <OpenEndpoints/> shows in the handling of the optional request body, which can be JSON or XML. There are several different options for building the content of the request body.
Any task can be made conditional, that means the task will only be executed if some parameter value matches a condition.
The current set of operators supported are:
if="..." equals="..."
if="..." notequals="..."
if="..." isempty="true"
if="..." hasmultiple="true"
if="..." gt="..."
if="..." ge="..."
if="..." lt="..."
if="..." le="..."
Note the syntax of the if condition: either side can use a parameter placeholder.
If the parameter has a value like foo||bar (i.e. created as a result of a request such as ?param=foo&param=bar), then equals="..." will check whether any of the values match, and notequals="..." will check that none of the values match.
For the gt, ge, lt, le operators the comparison values are treated as decimal numbers. If either side is empty or not parseable as a number, the comparison is false.
The right-hand side of isempty and hasmultiple can be true or false.
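For example, conditional tasks might be sketched as follows. The task classes and parameter names are illustrative; the if/equals and if/gt attribute pairs are as documented above.

```xml
<!-- executed only if the "newsletter" parameter equals "yes" -->
<task class="endpoints.task.EmailTask" if="${newsletter}" equals="yes">
  ...
</task>

<!-- executed only if "quantity" is numerically greater than 100 -->
<task class="endpoints.task.HttpRequestTask" if="${quantity}" gt="100">
  ...
</task>
```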
By default the Saxon-HE product from Saxonica is used, which is open-source and requires no license fees.
The software also supports commercial versions of Saxon. When Saxon-PE is available, the following additional XSLT functions are available in OpenEndpoints:
• <xsl:value-of select="uuid:randomUUID()" xmlns:uuid="java:java.util.UUID"/> generates a random UUID.
• <xsl:value-of select="math:random()" xmlns:math="java:java.lang.Math"/> generates a random number between 0 and 1, e.g. 0.37575608763635215.
• <xsl:value-of select="base64:encode('foo')" xmlns:base64="java:com.offerready.xslt.xsltfunction.Base64"/> encodes text as Base64.
• <xsl:value-of select="base64:decode('Zm9v')" xmlns:base64="java:com.offerready.xslt.xsltfunction.Base64"/> decodes Base64; the assumption is that the encoded text is UTF-8 text.
• <xsl:value-of select="digest:sha256Hex('foo')" xmlns:digest="java:org.apache.commons.codec.digest.DigestUtils"/> returns the SHA-256 digest of the text as a hex string.
• <xsl:value-of select="reCaptchaV3:check('server side key', 'token-from-request')" xmlns:reCaptchaV3="java:com.offerready.xslt.xsltfunction.ReCaptchaV3Client"/> yields a number from 0.0 to 1.0, or -1.0 in case a communication error has occurred (see the log for more details of the error).
In most cases, the XML that you are transforming will have a schema (xsd) that can be used when developing the XSLT. The XML standard also provides for reference to the xsd directly from the XML.
Our implementation of the XSLT processor does not expect any reference to an XSD, and ignores the reference if present.
Do not rely on data-type definition from your xsd
For the development of the XSLT, we recommend always specifying data types explicitly ("cast as").
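For example, instead of relying on the XSD to make a comparison numeric, cast explicitly. This is XPath 2.0 syntax (available with Saxon); the attribute name is an illustrative assumption.

```xml
<!-- explicit cast instead of relying on a schema-derived type -->
<xsl:if test="(@price cast as xs:decimal) gt 100.00"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  ...
</xsl:if>
```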
The global section of XSLT can contain parameters which are intended for the transfer of values for controlling the XSLT.
The root element of a transformer (see: Data Source Transformation) can have elements like <placeholder-value placeholder-name="x" value="y"/>, which will be passed to the XSLT processing as <xsl:param>.
Note that it is not supported to expand an Endpoint Parameter in the value attribute: ${foo} etc. will not work.
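For example (a sketch; the surrounding transformer elements and the names are illustrative):

```xml
<!-- in the transformer definition -->
<placeholder-value placeholder-name="company-name" value="Example GmbH"/>

<!-- in the XSLT: received as a global parameter -->
<xsl:param name="company-name"/>
<xsl:value-of select="$company-name"/>
```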
While developing the data-source-transformation XSLT, it can be useful to inspect the transformation input XML the XSLT will be applied to.
To obtain it, simply omit the xslt (and all other options) from the transformer; the raw transformation-input XML is then returned.
In addition to parameters there are Intermediate Values. These are like parameters, but they do not come from the user's request; instead they come from other tasks.
For example, imagine a CRM which requires 2 separate calls:
fetch the next available id for a new customer
insert a new customer, with that id (from the first request) as a mandatory parameter
In this case the first <task> will fetch the id, and make it available as an input to the second task. This is an “intermediate value”.
Intermediate values can be referenced as ${value} just like normal parameters.
Intermediate values may not have the same name as a parameter declared in “endpoints.xml”. (If intermediate values could have the same names, then ${value} could be ambiguous.)
Tasks must explicitly specify which intermediate values they output and which they input. Any task may accept any input intermediate value, however, the output of an intermediate value is task-specific. (For example, HTTP Tasks parse the response, but there is no useful way for an email task to output a variable.)
A task which outputs intermediate values may not be optional (with if and equals attributes). That is because the output variables will be used by other tasks, therefore the task must always run.
For example:
To produce an <output-intermediate-value> from a response-body, one of the following syntaxes must be used:
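A sketch of the two syntaxes, assuming the attribute names xpath and jsonpath (the name attribute matches the declared intermediate value):

```xml
<!-- XML response: extract the id with an XPath expression -->
<output-intermediate-value name="customer-id" xpath="/response/customer/id"/>

<!-- JSON response: extract the id with a JSONPath expression -->
<output-intermediate-value name="customer-id" jsonpath="$.customer.id"/>
```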
The former requires that the result of the response be XML, the latter that it be JSON. No attempt is made to convert the response between XML and JSON. The regex attribute is optional.
In case you want to use the intermediate value within a regular data-source XSLT, you need to declare it within the <success> tags:
Originally, XSLT was specifically designed to support a 2-step process:
Transformation: In the first step, an XML document is transformed into another XML document containing information about how to display the document: what font to use, the size of a page, etc. This markup is called Formatting Objects. Note that the resulting XML document not only contains formatting information, but also stores all of the document's data within itself.
Formatting: Some software (called “FO-Processor”) transforms the result of the first step (transformation) into the intended output format. For example, Apache™ FOP (Formatting Objects Processor) is an output independent formatter, which can generate PDF.
OpenEndpoints uses Apache™ FOP to generate PDF.
The data-source transformer refers to an XSLT that creates XSL-FO (an XML containing content plus formatting options).
In the example above the 2-step process is:
[xslt-which-creates-xsl-fo] transforms [data-source] into the XSL-FO output, i.e. an XML containing content plus formatting options. The option <convert-output-xsl-fo-to-pdf/> triggers sending the XSL-FO output to Apache FOP, which will generate the PDF.
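The 2-step process can be sketched as a transformer definition. Apart from <convert-output-xsl-fo-to-pdf/>, which is documented above, the element and file names are illustrative assumptions.

```xml
<transformer name="invoice-pdf">
  <data-source name="invoice-data"/>
  <!-- step 1: the XSLT produces XSL-FO (content + formatting options) -->
  <xslt name="invoice-fo.xslt"/>
  <!-- step 2: Apache FOP converts the XSL-FO output to PDF -->
  <convert-output-xsl-fo-to-pdf/>
</transformer>
```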
The Apache FOP integration with OpenEndpoints includes easy-to-use options to
embed images into the generated PDF
embed fonts, such as Google Fonts, into your generated PDF
XSL-FO is a very mature standard for page composition, designed for paged media. It offers comprehensive layout functionality, which makes it possible to create error-free and beautiful layouts: pagination controls to avoid “widows” and “orphans”, support for multiple columns, indexing, etc. It is the perfect technology for producing “content-driven” design.
For more advantages and disadvantages of XSL-FO see: https://en.wikipedia.org/wiki/XSL_Formatting_Objects
Offer-Ready uses multiple cores, if available. If not specified otherwise, tasks are executed in parallel and will be finished in an arbitrary order.
Parallel execution of primary action and tasks
Note that - as a default - the primary task (data transformation) and all tasks are executed in parallel.
Therefore if one HTTP request depends on another previous one having completed first, that will not work without declaring this dependency.
You may optionally assign an id attribute to the task element:
You may insert an element <after task-id="..."/> into any task that needs to be executed after the task with that id.
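For example, to force one task to run only after another has completed (the task classes and URLs are illustrative; id and <after task-id="..."/> are as documented above):

```xml
<task id="reserve-invoice-number" class="endpoints.task.HttpRequestTask">
  <url>https://accounting.example.com/api/next-invoice-number</url>
</task>

<!-- runs only after "reserve-invoice-number" has completed -->
<task class="endpoints.task.HttpRequestTask">
  <after task-id="reserve-invoice-number"/>
  <url>https://accounting.example.com/api/invoices</url>
</task>
```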
Note that on using Intermediate Values the software will automatically determine the order of execution such that intermediate outputs are created before they are required as inputs. Intermediate values and “after” elements can be used in parallel.
Depending on what fonts readers have available on their local computers, fonts used in your PDF will either be displayed as you intended, or the computer may substitute fonts. Substitution can result in significant differences between your intended output and what the reader sees.
Embedding fonts prevents font substitution. This ensures that text is displayed and printed in its original font.
Offer-Ready supports embedding of TrueType fonts, which have a “ttf” file extension.
Create a directory “fonts” in the root directory of your configuration. Within this “fonts” directory, there are:
an overall fonts configuration file called “apache-fop-config.xml”
a TTF file for each font referred to in the apache-fop-config.xml, for example “HelveticaNeue-Light.ttf”
The “apache-fop-config.xml” declares the TTF fonts to the FOP processor:
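A minimal sketch of such a configuration, following the standard Apache FOP configuration format (file names and the font-triplet name are illustrative):

```xml
<fop version="1.0">
  <renderers>
    <renderer mime="application/pdf">
      <fonts>
        <!-- each <font> embeds one TTF file from the "fonts" directory -->
        <font kerning="yes" embed-url="HelveticaNeue-Light.ttf">
          <font-triplet name="HelveticaNeue" style="normal" weight="normal"/>
        </font>
        <font kerning="yes" embed-url="HelveticaNeue-Bold.ttf">
          <font-triplet name="HelveticaNeue" style="normal" weight="bold"/>
        </font>
      </fonts>
    </renderer>
  </renderers>
</fop>
```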
When developing your XSL-FO template, use the (embedded) font-triplet name as if the font were available on your local computer. Technically, your XSL-FO file may contain any font name with any font-style and font-weight. The behaviour of a common PDF reader is:
If a font-name/font-style/font-weight triplet used in the PDF is available as an embedded font-triplet, then it will be used.
If a triplet used in the PDF is not available as an embedded font-triplet, but is available on the local computer, then the font from the local computer will be used.
If neither of the previous options applies, the PDF reader will substitute some other font.
Beware of these “traps”:
If your PDF does not, for example, show a text in bold letters although your intention was to use bold letters, then you may have defined the font-triplet for “weight=normal” but not also for “weight=bold”. You need to declare all required combinations of name/style/weight.
If your local computer has a font “Helvetica Neue” and your font-triplet name is “HelveticaNeue” (without a blank), then you must use “HelveticaNeue” in your XSL-FO. A WYSIWYG editor (like Altova StyleVision) will likely offer local fonts only in a dropdown.
The preview of your PDF in a local development environment may look different from the PDF generated by Offer-Ready. The local development environment will use fonts from your computer, not the fonts checked in to the above structure.
Using a font may require a license
It is your responsibility to respect license constraints. While many fonts can be used without restrictions, other fonts may require payment of a license fee.